  • IETF 117 Highlights

    IETF 117 is a few weeks behind us and Dhruv Dhody, IAB Member and liaison to the IESG, took the opportunity to report on a few highlights and some impressions.

    • Dhruv Dhody, IAB Member and liaison to the IESG
    21 Aug 2023
  • Proposed response to meeting venue consultations and the complex issues raised

    The IETF Administration LLC recently sought feedback from the community on the possibility of holding an IETF Meeting in the cities of Beijing, Istanbul, Kuala Lumpur and Shenzhen, with received feedback including views that were well expressed and well argued but strongly conflicting. The IETF LLC has considered this feedback in-depth and now seeks community feedback on its proposed response.

    • Jay Daley, IETF Executive Director
    21 Aug 2023
  • Submit Birds of a Feather session proposals for IETF 118

    Now's the time to submit Birds of a Feather (BOF) session ideas for the IETF 118 meeting, 4-10 November 2023, with proposals due by 8 September.

      16 Aug 2023
    • Applied Networking Research Workshop 2023 Review

      More than 250 participants gathered online and in person for ANRW 2023, the academic workshop that provides a forum for researchers, vendors, network operators, and the Internet standards community to present and discuss emerging results in applied networking research.

      • Maria Apostolaki, ANRW Program co-chair
      • Francis Yan, ANRW Program co-chair
      16 Aug 2023
    • Catching up on IETF 117

      Recordings are now available for sessions held during the IETF 117 meeting and the IETF Hackathon, where more than 1500 participants gathered in San Francisco and online, 22-28 July 2023.

        31 Jul 2023


      IETF 117 post-meeting survey

      • Jay Daley, IETF Executive Director

      11 Aug 2023

      IETF 117 San Francisco was held 22-28 July 2023

      The results of the IETF 117 San Francisco post-meeting survey are now available on a web-based interactive dashboard. Thank you to all of you who responded to this survey; we use your views to continually adjust the meeting experience.

      Analysis

      We received 253 responses, of which 251 were from people who participated in IETF 117: 192 onsite and 59 remote. As only 2 of the respondents did not participate in IETF 117, the questions specific to them are not shown in the dashboard, but their views were read and considered. With 1579 registered participants, this gives the survey a maximum margin of error of +/- 5.66%.
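
      As a check on that figure, a standard margin-of-error calculation for a proportion at 95% confidence, with a finite population correction, reproduces it to within rounding. This is an assumed reconstruction; the survey write-up does not state the exact formula used.

      ```latex
      % n = 253 responses, N = 1579 registered participants, z = 1.96 (95% confidence),
      % p = 0.5 (the most conservative proportion). Assumed formula, not stated in the post.
      \mathrm{MoE} = z \sqrt{\frac{p(1-p)}{n}} \cdot \sqrt{\frac{N-n}{N-1}}
                   = 1.96 \sqrt{\frac{0.25}{253}} \cdot \sqrt{\frac{1326}{1578}}
                   \approx 0.0565
      ```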

      The results for satisfaction questions include a mean and standard deviation using a five-point scoring system: Very satisfied = 5, Satisfied = 4, Neither satisfied nor dissatisfied = 3, Dissatisfied = 2, Very dissatisfied = 1. While there's no hard and fast rule, a mean above 4.50 is sometimes considered excellent, 4.00 to 4.49 good, 3.50 to 3.99 acceptable, and below 3.50 poor, or very poor if below 3.00. The satisfaction score tables also include a top box, the total of satisfied and very satisfied responses, and a bottom box, the total of dissatisfied and very dissatisfied responses, both as percentages. Please note that a small number of questions use a four-point scale.
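
      As an illustration of how statistics like these can be derived from raw responses, here is a minimal Python sketch using made-up response data; it is not the survey team's actual tooling.

      ```python
      from statistics import mean, pstdev

      # Hypothetical responses on the five-point scale described above:
      # Very satisfied = 5, Satisfied = 4, Neither = 3, Dissatisfied = 2, Very dissatisfied = 1
      responses = [5, 4, 4, 5, 3, 4, 2, 5, 4, 4]

      score_mean = mean(responses)   # 4.00 here, i.e. the "good" band (4.00 to 4.49)
      score_sd = pstdev(responses)   # spread of the responses

      # Top box: % satisfied or very satisfied; bottom box: % dissatisfied or very dissatisfied
      top_box = 100 * sum(r >= 4 for r in responses) / len(responses)
      bottom_box = 100 * sum(r <= 2 for r in responses) / len(responses)

      print(f"mean={score_mean:.2f} sd={score_sd:.2f} "
            f"top box={top_box:.0f}% bottom box={bottom_box:.0f}%")
      ```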

      Question changes since the last survey

      For this survey we added some new questions about the venue and the onsite experience. This is in addition to the questions on accommodation that we added in the last survey. We also added some new questions specifically for those who participated online to understand more about that.

      Actions taken following the last survey

      For this meeting, we made the following changes, prompted by survey feedback:

      • Rented a significant amount of hallway seating that was delivered to the venue and distributed to key areas.
      • Provided each room with printed QR codes in a form that can be handed around. We also undertook a manual count of people in each session so that we could compare this to the Meetecho records and gauge just how big a problem the non-recording of participation is.
      • Set up a standard remote participation service (Webex) for the side meeting rooms and purchased remote microphones for the Owl360 A/V system already provided.

      Satisfaction

      Overall satisfaction is 4.30, which is again a good result. With some key exceptions, the satisfaction scores remain high, reflecting the various improvements made since we returned to onsite meetings.

      The table below shows the satisfaction scores for the last six meetings, along with colour-coded indicators for the five-point scale above: excellent (🔵), good (🟢), acceptable (🟡), poor (🔴), very poor (⚫️).

      Satisfaction scores for the last six meetings
      | | IETF 117 San Francisco | IETF 116 Yokohama | IETF 115 London | IETF 114 Phila. | IETF 113 Vienna | IETF 112 Online |
      | --- | --- | --- | --- | --- | --- | --- |
      | Overall satisfaction | 4.30 🟢 | 4.30 🟢 | 4.28 🟢 | 4.19 🟢 | 4.36 🟢 | 4.15 🟢 |
      | AGENDA | | | | | | |
      | Overall agenda | 4.16 🟢 | 4.18 🟢 | 4.22 🟢 | 4.06 🟢 | 4.16 🟢 | 4.11 🟢 |
      | Sessions for new WGs | 4.19 🟢 | 4.17 🟢 | 4.12 🟢 | 4.15 🟢 | 4.18 🟢 | 4.10 🟢 |
      | Sessions for existing WGs | 4.22 🟢 | 4.22 🟢 | 4.22 🟢 | 4.10 🟢 | 4.24 🟢 | 4.19 🟢 |
      | BOFs | 3.95 🟡 | 4.11 🟢 | 4.10 🟢 | 4.09 🟢 | 4.04 🟢 | 3.92 🟡 |
      | Sessions for existing RGs | 4.12 🟢 | 4.14 🟢 | 4.10 🟢 | 3.95 🟡 | 4.13 🟢 | 4.05 🟢 |
      | Plenary | 3.99 🟡 | 3.98 🟡 | 3.98 🟡 | 3.98 🟡 | 3.94 🟡 | - |
      | Side meetings | 3.75 🟡 | 3.73 🟡 | 3.81 🟡 | 3.73 🟡 | 3.52 🟡 | 3.46 🔴 |
      | Hackathon | 4.25 🟢 | 4.34 🟢 | 4.35 🟢 | 4.30 🟢 | 4.09 🟢 | 3.83 🟡 |
      | HotRFC | 3.89 🟡 | 3.84 🟡 | 4.21 🟢 | 3.94 🟡 | 4.17 🟢 | 3.54 🟡 |
      | Pecha Kucha | 4.15 🟢 | - | - | - | - | - |
      | Office hours | 3.98 🟡 | 4.23 🟢 | 4.00 🟢 | 4.09 🟢 | 3.96 🟡 | 3.91 🟡 |
      | Opportunities for social interaction | 4.11 🟢 | 3.72 🟡 | 3.98 🟡 | 3.89 🟡 | 3.51 🟡 | 2.79 ⚫️ |
      | STRUCTURE | | | | | | |
      | Overall meeting structure | 4.28 🟢 | 4.28 🟢 | 4.28 🟢 | 4.19 🟢 | 4.26 🟢 | 4.23 🟢 |
      | Start time | 4.28 🟢 (9:30am) | 4.16 🟢 (9:30am) | 4.28 🟢 (9:30am) | 4.20 🟢 (10:00am) | 4.12 🟢 (10:00am) | 3.95 🟡 (12:00pm) |
      | Length of day | 4.30 🟢 | 4.30 🟢 | 4.32 🟢 | 4.10 🟢 | 4.20 🟢 | 4.21 🟢 |
      | Number of days | 4.27 🟢 (5+2) | 4.30 🟢 (5+2) | 4.32 🟢 (5+2) | 4.30 🟢 (5+2) | 4.23 🟢 (5+2) | 4.36 🟢 (5) |
      | Session lengths | 4.41 🟢 (60 / 90 / 120) | 4.36 🟢 (60 / 90 / 120) | 4.32 🟢 (60 / 90 / 120) | 4.25 🟢 (60/120) | 4.31 🟢 (60/120) | 4.26 🟢 (60/120) |
      | Break lengths | 4.32 🟢 (30/90) | 4.38 🟢 (30/90) | 4.36 🟢 (30/90) | 4.25 🟢 (30/90) | 4.16 🟢 (30/60) | 4.15 🟢 (30) |
      | Number of parallel tracks | 4.08 🟢 (8) | 4.01 🟢 (8) | 3.90 🟡 (8) | 3.86 🟡 (8) | 3.92 🟡 (8) | 3.92 🟡 (8) |
      | PARTICIPATION MECHANISMS | | | | | | |
      | Meetecho | 4.35 🟢 | 4.45 🟢 | 4.45 🟢 | 4.23 🟢 | 4.36 🟢 | 4.36 🟢 |
      | Gather | 3.52 🟡 | 3.46 🔴 | 3.37 🔴 | 3.06 🔴 | 3.04 🔴 | 3.40 🔴 |
      | Zulip | 3.66 🟡 | 3.77 🟡 | 3.73 🟡 | 3.56 🟡 | 2.91 ⚫️ | - |
      | Jabber | - | - | - | - | 3.80 🟡 | 3.75 🟡 |
      | Audio streams | 4.02 🟢 | 4.21 🟢 | 4.04 🟢 | 4.05 🟢 | 4.14 🟢 | 4.41 🟢 |
      | YouTube streams | 4.32 🟢 | 4.36 🟢 | 4.25 🟢 | 4.22 🟢 | 4.25 🟢 | 4.41 🟢 |
      | CONFLICTS | | | | | | |
      | Conflict avoidance | 3.90 🟡 | 3.94 🟡 | 3.91 🟡 | 3.78 🟡 | 3.89 🟡 | 4.00 🟢 |
      | VENUE & ACCOMMODATION | | | | | | |
      | Overall accommodation | 4.07 🟢 | 4.09 🟢 | - | - | - | - |
      | Overall venue | 3.90 🟡 | - | - | - | - | - |
      | Location | 3.60 🟡 | - | - | - | - | - |
      | Venue facilities | 4.07 🟢 | - | - | - | - | - |
      | Cost of rooms | 2.87 ⚫️ | - | - | - | - | - |
      | Availability of rooms | 4.07 🟢 | - | - | - | - | - |
      | ONSITE | | | | | | |
      | Overall | 4.29 🟢 | - | - | - | - | - |
      | Badge collection | 4.69 🔵 | - | - | - | - | - |
      | WiFi | 3.98 🟡 | 4.06 🟢 | 4.10 🟢 | 3.82 🟡 | - | - |
      | QR Codes | 4.11 🟢 | - | - | - | - | - |
      | Break F&B | 4.44 🟢 | - | - | - | - | - |
      | Breakout seating | 4.08 🟢 | - | - | - | - | - |
      | Signage | 4.22 🟢 | - | - | - | - | - |
      | Coffee carts | 4.56 🔵 | - | - | - | - | - |
      | Childcare | 4.06 🟢 | - | - | - | - | - |

      Areas for improvement

      Gather / Social interaction for remote participants

      This is an ongoing issue that we are struggling to address. Please contact me directly if you have any thoughts on this.

      Zulip

      Satisfaction for Zulip is still below target, but we continue to see this as largely due to a lack of familiarity and something that will change over time, rather than a fundamental issue with the product.

      Side meetings

      While we added the remote participation service, as identified in previous feedback, it does not seem to have made any difference to the satisfaction score. We will continue to watch this to see if things improve as people get more familiar with that service.

      Conflict avoidance

      We again followed the new process to reduce conflicts described in our blog post, but again this failed to achieve any improvement in the satisfaction score. We will consider the feedback and look at other ways to improve this.

      Office Hours

      This took an unexpected and significant dip downwards, which we are in the process of trying to understand.

      Individual comments

      The individual comments covered a number of key themes:

      • The location of the venue. The feedback is that this area of town, and SF generally, are too run down and unsafe to meet in. Unfortunately, this meeting and IETF 127 in 2026, which will be in the same venue, were both booked many years ago and deferred due to concerns about people being able to attend. In other words, we're stuck with them, but we will watch how the area develops and, if it does not look to have improved closer to 2026, we may consider taking the financial hit of a cancellation.
      • Options for dining. The feedback is twofold: firstly, that the hotel had poor evening dining options, and secondly, that this meant people weren't able to bump into each other and chat over dinner. The latter problem was particularly felt by some new participants.
      • WiFi. The feedback is that the WiFi was unreliable. This did not become clear until late in the week because of the lack of reports to the helpdesk, which may be a consequence of us removing the physical helpdesk due to lack of usage. It appears this was caused by a bug either in the network kit or in the network stack on certain machines, and the NOC are still tracking it down.
      • Room rates. Satisfaction with these was very low, backed up by multiple comments. There were an unusual number of complications with the hotel room rates and we did not communicate these well enough, which we will correct in future. We've heard the calls for us to recommend a nearby, much cheaper hotel, but we're reluctant to do this, as the only way we can stop a rate from being raised when people book is by committing to a minimum number of rooms. However, when we have done this we've seen poor takeup and in some cases suffered a loss. We will look at this again to see if we can manage it any other way.

      Why people participated remotely

      For the first time, we asked people who participated remotely why they did so (Q5) and whether they would have preferred to participate onsite (Q5a). We heard a lot of feedback during the meeting that visas were a factor, and in the survey 9 of 58 people identified them as such. We need data from more meetings to see how this compares.

      The major factor, though, cited by 36 of the 58, was a lack of funding to travel; all 36 of them would have participated onsite if they could have. The margin of error here is 12.31% (58 out of 674 remote participants answered this question), so there were between 335 and 501 people who would have participated onsite had funding been available.
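
      For readers who want to see how that range can be derived, here is a minimal reconstruction, assuming the ±12.31% margin of error is applied to the 36/58 proportion and scaled to the 674 remote participants; the post does not show its working.

      ```latex
      % 36 of 58 respondents cited a lack of funding; 674 remote participants in total.
      \frac{36}{58} \approx 0.621, \qquad 0.621 \pm 0.123 \;\Rightarrow\; [0.498,\ 0.744]
      % Scaling to the remote population:
      0.498 \times 674 \approx 335, \qquad 0.744 \times 674 \approx 501
      ```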

      And finally

      Thank you to everyone who responded to this survey; your feedback is much appreciated.

