
    IETF 115 post-meeting survey

    Jay Daley, IETF Executive Director

    22 Nov 2022

    IETF 115 London was held 5-11 November 2022

    The results of the IETF 115 London post-meeting survey are now available on a web-based interactive dashboard. Thank you to all of you who responded to this survey; we use your views to continually adjust the meeting experience.

    Analysis

    We received 288 responses, of which 286 were from people who participated in IETF 115: 236 onsite and 50 remote. As only 2 of the respondents did not participate in IETF 115, the questions specific to them are not shown in the dashboard, but their views were read and considered. With 1424 recorded participants, this gives the survey a maximum margin of error of +/- 5.18%.
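
    The survey report does not spell out the formula behind that figure, but +/- 5.18% is consistent with the standard margin-of-error calculation with a finite population correction. The sketch below reproduces it; the 95% confidence level and worst-case 50/50 response split are assumptions, not something stated in the survey.

    ```python
    # Illustrative sketch: reproduces the quoted +/- 5.18% margin of error for
    # n = 286 respondents out of N = 1424 recorded participants, assuming a 95%
    # confidence level and a worst-case 50/50 response split (both assumptions).
    import math

    N = 1424   # recorded IETF 115 participants
    n = 286    # survey respondents who participated in IETF 115
    z = 1.96   # z-score for a 95% confidence level (assumed)
    p = 0.5    # worst-case response proportion (assumed)

    fpc = math.sqrt((N - n) / (N - 1))          # finite population correction
    moe = z * math.sqrt(p * (1 - p) / n) * fpc  # margin of error

    print(f"+/- {moe * 100:.2f}%")  # -> +/- 5.18%
    ```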

    The results for satisfaction questions include a mean and standard deviation using a five-point scoring system: Very satisfied = 5, Satisfied = 4, Neither satisfied nor dissatisfied = 3, Dissatisfied = 2, Very dissatisfied = 1. While there’s no hard and fast rule, a mean above 4.50 is sometimes considered excellent, 4.00 to 4.49 good, 3.50 to 3.99 acceptable, and below 3.50 poor, or very poor if below 3.00. The satisfaction score tables also include a top box (the total of satisfied and very satisfied) and a bottom box (the total of dissatisfied and very dissatisfied), both as percentages. Please note that a small number of questions are on a four-point scale.
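
    To make that concrete, here is a small sketch of how a mean, standard deviation, top box and bottom box can be derived from response counts on the five-point scale. The counts below are invented for illustration (only the total of 286 matches the respondent count); they are not actual survey data.

    ```python
    import math

    # Five-point scale used for satisfaction questions
    scale = {"Very satisfied": 5, "Satisfied": 4,
             "Neither satisfied nor dissatisfied": 3,
             "Dissatisfied": 2, "Very dissatisfied": 1}

    # Hypothetical response counts for a single question (not real survey data)
    counts = {"Very satisfied": 120, "Satisfied": 130,
              "Neither satisfied nor dissatisfied": 25,
              "Dissatisfied": 8, "Very dissatisfied": 3}

    n = sum(counts.values())
    mean = sum(scale[k] * c for k, c in counts.items()) / n
    sd = math.sqrt(sum(c * (scale[k] - mean) ** 2 for k, c in counts.items()) / n)

    top_box = 100 * (counts["Very satisfied"] + counts["Satisfied"]) / n
    bottom_box = 100 * (counts["Dissatisfied"] + counts["Very dissatisfied"]) / n

    print(f"mean={mean:.2f} sd={sd:.2f} "
          f"top box={top_box:.1f}% bottom box={bottom_box:.1f}%")
    # e.g. mean=4.24 sd=0.81 top box=87.4% bottom box=3.8%
    ```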

    Satisfaction

    Overall satisfaction is 4.28, which is again a good result. With only a few exceptions, the satisfaction scores are the highest they have been in two years; those exceptions are almost all from IETF 113, which appears to have benefitted from a bump in satisfaction due to people being able to meet in person for the first time in two years. The changes that have had a positive impact on satisfaction include:

    • Fine tuning the meeting structure including:
      • starting earlier at 9:30am (thanks to those who noted that I initially had the start time wrong in the survey)
      • scheduling a longer day
      • providing a wider range of session lengths
    • Ongoing investment in Meetecho
    • Better agenda planning

    The table below shows the satisfaction scores for the last six meetings, along with colour coded indicators for the five point scale above.

    Satisfaction scores for the last six meetings
    IETF 115 London IETF 114 Phila. IETF 113 Vienna IETF 112 Online IETF 111 Online IETF 110 Online
    Overall satisfaction 4.28 🟒 4.19 🟒 4.36 🟒 4.15 🟒 4.13 🟒 4.20 🟒
    AGENDA
    Overall agenda 4.22 🟒 4.06 🟒 4.16 🟒 4.11 🟒 3.91 🟑 4.04 🟒
    Sessions for new WGs 4.12 🟒 4.15 🟒 4.18 🟒 4.10 🟒 4.03 🟒 4.00 🟒
    Sessions for existing WGs 4.22 🟒 4.10 🟒 4.24 🟒 4.19 🟒 4.04 🟒 4.18 🟒
    BOFs 4.10 🟒 4.09 🟒 4.04 🟒 3.92 🟑 4.01 🟒 3.87 🟑
    Sessions for existing RGs 4.10 🟒 3.95 🟑 4.13 🟒 4.05 🟒 3.99 🟑 4.10 🟒
    Plenary 3.98 🟑 3.98 🟑 3.94 🟑 - 3.91 🟑 4.03 🟒
    Side meetings 3.81 🟑 3.73 🟑 3.52 🟑 3.46 🔴 3.84 🟑 3.22 🔴
    Hackathon 4.35 🟒 4.30 🟒 4.09 🟒 3.83 🟑 4.14 🟒 4.10 🟒
    HotRFC 4.21 🟒 3.94 🟑 4.17 🟒 3.54 🟑 - -
    Office hours 4.00 🟒 4.09 🟒 3.96 🟑 3.91 🟑 4.12 🟒 3.96 🟑
    Opportunities for social interaction 3.98 🟑 3.89 🟑 3.51 🟑 2.79 ⚫️ 2.90 ⚫️ 3.11 🔴
    STRUCTURE
    Overall meeting structure 4.28 🟒 4.19 🟒 4.26 🟒 4.23 🟒 4.08 🟒 4.20 🟒
    Start time 4.28 🟒 (9:30am) 4.20 🟒 (10:00am) 4.12 🟒 (10:00am) 3.95 🟑 (12:00pm) 3.01 🔴 (12:00pm) 3.96 🟑
    Length of day 4.32 🟒 4.10 🟒 4.20 🟒 4.21 🟒 3.93 🟑 4.12 🟒
    Number of days 4.32 🟒 (5+2) 4.30 🟒 (5+2) 4.23 🟒 (5+2) 4.36 🟒 (5) 4.14 🟒 (5) 4.26 🟒 (5)
    Session lengths 4.32 🟒 (60/90/120) 4.25 🟒 (60/120) 4.31 🟒 (60/120) 4.26 🟒 (60/120) 4.12 🟒 (60/120) 4.17 🟒 (60/120)
    Break lengths 4.36 🟒 (30/90) 4.25 🟒 (30/90) 4.16 🟒 (30/60) 4.15 🟒 (30) 4.09 🟒 (30) 4.16 🟒 (30)
    Number of parallel tracks 3.90 🟑 (8) 3.86 🟑 (8) 3.92 🟑 (8) 3.92 🟑 (8) 3.60 🟑 (9) 3.58 🟑 (9)
    PARTICIPATION MECHANISMS
    Meetecho 4.45 🟒 4.23 🟒 4.36 🟒 4.36 🟒 4.29 🟒 4.30 🟒
    Gather 3.37 🔴 3.06 🔴 3.04 🔴 3.40 🔴 3.77 🟑 3.90 🟑
    Zulip 3.73 🟑 3.56 🟑 2.91 ⚫️ - - 3.90 🟑 (trial)
    Jabber - - 3.80 🟑 3.75 🟑 3.68 🟑 3.85 🟑
    Audio streams 4.04 🟒 4.05 🟒 4.14 🟒 4.41 🟒 3.84 🟑 4.22 🟒
    YouTube streams 4.25 🟒 4.22 🟒 4.25 🟒 4.41 🟒 4.09 🟒 4.37 🟒
    Onsite network and WiFi 4.10 🟒 3.82 🟑 - - - -
    CONFLICTS
    Conflict avoidance 3.91 🟑 3.78 🟑 3.89 🟑 4.00 🟒 3.76 🟑 3.73 🟑

    Areas for improvement

    Gather / Social interaction for remote participants

    The satisfaction score for Gather is Poor at 3.37, and the score for "Opportunities for social interaction" among remote participants was even lower at a Very Poor 2.58. To be entirely open: while Gather is the only opportunity we provide for social interaction for remote participants, we know it is a poor experience in a hybrid setting, and we are struggling to identify any better alternative, or in fact any alternative worth experimenting with. We have researched physical solutions, including telepresence robots and telepresence stands, but the scale would only allow a fraction of our remote participants to use them. We have also researched telepresence room technology, but the asymmetry of one location with many participants and many locations with one participant would likely make that a very poor experience.

    We are going to continue to try to identify and test alternatives, and will consider the feedback in the survey about setting up a long-running Meetecho room or some mechanism for people to create ad-hoc meetings. In the meantime, please contact me directly if you have any further suggestions.

    Zulip

    Satisfaction with Zulip at 3.73 is similar to that for Jabber, the service it replaced, but our goal is for it to be a significant improvement over Jabber, and for that to be reflected in the satisfaction scores. For now we are treating this as a familiarity issue and will aim to support people in making the switch, while expecting results not to improve much for a few more meetings.

    Side meetings

    The satisfaction score for side meetings, at 3.81, is at the high end of the scores for the last six meetings, but still notably lower than we would like. In each post-meeting survey we receive feedback about how to improve side meetings; in this survey that feedback largely focused on two issues: the size of the rooms compared to the number of participants, and side meetings being listed separately from the main agenda. We will look at improvements in these areas, but it should be noted that the IESG has considered this area a number of times and as recently as June 2021 published a blog post setting out its views on the overall approach to side meetings.

    Conflict avoidance

    Over the last few years we have invested in a new tool to automate the building of an agenda that satisfies any number of pre-defined constraints, and we are now aiming to improve the generation of these constraints using the data supplied in the post-meeting surveys. As you might expect, this is a complex process that requires involvement from the IESG, IRTF and WG chairs, and so far it has not moved the needle on the satisfaction score, but we are now implementing a more detailed process for considering community feedback. The number of conflicts is of course closely related to the number of sessions per day and the number of parallel tracks, and it may be that we have reached the limit of what we can achieve without adjusting either of those.
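
    Neither the tool nor its constraint format is described in this post, so the sketch below is purely illustrative of the general idea: sessions are placed into parallel-track timeslots while respecting pre-defined pairs that must not clash. The session names, conflict pairs and greedy strategy are all invented for illustration and are not how the actual IETF scheduling tool works.

    ```python
    # Purely illustrative sketch of constraint-aware agenda building; this is
    # not the actual IETF scheduling tool. Names and conflicts are invented.
    sessions = ["wg-a", "wg-b", "wg-c", "wg-d", "wg-e"]

    # Pre-defined constraints: pairs of sessions that must not run in parallel
    conflicts = {("wg-a", "wg-b"), ("wg-a", "wg-c"), ("wg-d", "wg-e")}
    timeslots = 2  # parallel-track timeslots available

    def clash(a: str, b: str) -> bool:
        return (a, b) in conflicts or (b, a) in conflicts

    # Greedy assignment: place each session in the first timeslot where it
    # does not clash with anything already scheduled there.
    agenda = {t: [] for t in range(timeslots)}
    for s in sessions:
        for t in range(timeslots):
            if not any(clash(s, other) for other in agenda[t]):
                agenda[t].append(s)
                break
        else:
            print(f"could not place {s} without a conflict")

    for t, placed in agenda.items():
        print(f"timeslot {t}: {placed}")
    # timeslot 0: ['wg-a', 'wg-d']
    # timeslot 1: ['wg-b', 'wg-c', 'wg-e']
    ```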

    Number of parallel tracks

    There was a bump in satisfaction scores for the number of parallel tracks when we reduced from 9 to 8, but we are still only receiving an Acceptable rating. Any further change can only be achieved with a major change to the meeting structure, which requires careful consideration.

    Plenary

    The satisfaction score for the Plenary has hovered around the Acceptable/Good boundary for the last six meetings, despite our attempts to improve it by shortening some parts, increasing transparency and communicating more in advance so that the Plenary does not become a confrontational session. We would like to improve it to a consistent Good rating, but that will take some time as we need to do more in-depth research with individual community members to understand what they want from the session.

    COVID management

    We received a total of 32 reports of COVID infection related to IETF 115: 13 during the meeting and 19 in the three days following it. It is of course impossible for us to identify the source of any infection, particularly as the meeting took place in an environment where all COVID management policies have been withdrawn and very few people take any personal precautions in public. This is a higher proportion than for IETF 113 or IETF 114, but the community has become notably better at reporting infections to us in the immediate aftermath of a meeting, and so we may not have good enough data to make an inter-meeting comparison.

    Looking at the survey data, I need to start with an apology for a gap in the survey: while we asked onsite participants, and those who did not participate at all, for their views on the COVID policy, we did not ask those who participated remotely. As a result, the results will not incorporate the views of anyone who chose to participate remotely because the onsite policy was not to their liking.

    While there are multiple comments in the survey from people who strongly disagreed with the masking policy, the data (albeit limited, as noted above) provided a relatively clear result (Q24b) that those views were in a minority:

    • 44% would have participated onsite whatever the policy
    • 39% lean towards the policy staying as is or being stricter, made up of:
      • 31% might not have participated onsite if the policy had been any looser
      • 8% definitely would not have participated onsite if the policy had been any looser
    • 18% lean towards the policy staying as is or being looser, made up of:
      • 12% might not have participated onsite if the policy had been any stricter
      • 6% definitely would not have participated onsite if the policy had been any stricter

    When it comes to planning for IETF 116, we are probably going to be constrained by local requirements, but this data will be useful for IETF 117 if COVID is still at a level where we need to consider a management policy.

    And finally

    There are too many individual comments for me to address in detail in this post, but please be assured that we will consider them in our planning wherever possible. These include:

    • Allowing multiple one-day passes
    • Not siting the terminal room in a remote location
    • Providing more information about the different WiFi networks we operate
    • Making the meeting QR codes easier to find
    • Better space management for side meetings

    There were a few suggestions that we unfortunately cannot adopt, such as providing meals, but thank you for those anyway.

