Closing Plenary session
1 December 2023
BRIAN NISBET: Hello. Good morning to you all, and welcome to the Closing Plenary segment of RIPE 87. I am Brian; myself and Max will be chairing this session until we hand it all back to Mirjam to safely bring it home.
Before we have our first talk, I want to announce the results of the PC elections. Thank you to all of you who participated. A special thanks to all the people who put themselves forward for the role, and we hope that those of you who weren't successful on this occasion, will put yourselves forward again at a later point in time.
So, the two people who were successful are: Franziska Lichtblau and Valerie Aurora. So thank you very much.
And I know this will be said again, but I just want to say thank you very much, especially to Dmytro and Alexander, whose terms end at this meeting. Thank you very much for all the work that they have done.
So, without further ado, I will introduce Daniel Wagner from DE‑CIX to talk about how to operate a telescope without operating a telescope. This is the zen talk for the morning.
DANIEL WAGNER: Hello everyone, it is my pleasure to open this closing session, so this is sort of the beginning of the end.
My name is Daniel. I am a researcher at DE‑CIX and I am going to present to you, I would say, the most recent research work that we did, together with colleagues from Georgia Tech, TU Delft and Merit Networks. It's titled "How to operate a telescope without operating a telescope". It sounds weird but it's doable. I will show you how.
I'll start off with, what actually is an Internet telescope? Basically, it is a chunk of IP address space that is announced to the public Internet via BGP, so it's reachable to the public Internet.
And then what's special about it is that you do not do anything with that. You announce your IP address space and you don't host any services in it, so we consider it to be dark.
And if you do that, you would not expect to see any packets: nobody should contact you, because you are not hosting anything, so there is no purpose in contacting your network. But packets do come in, and this kind of traffic we refer to as Internet background radiation.
This has three different causes. For example, there could be scans on the Internet: people looking for services or hosts, maybe systems that deploy exploitable protocols that malicious people could use for their actions. So, for example, we have got a scanner here, this guy here, and he is just scanning the whole /0, so basically every address on the Internet, on the IPv4 address space to be exact in this study, so they will eventually contact your telescope network. What you do with the telescope network is run tcpdump behind it and just capture anything that is coming in.
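The setup Daniel describes, a dark prefix where everything that arrives counts as background radiation, can be sketched in a few lines. This is purely illustrative (the prefix and the packet records are invented, and the real setup captures with tcpdump), but it shows the core classification step:

```python
import ipaddress
from collections import Counter

# Hypothetical dark prefix announced via BGP; nothing is hosted inside it,
# so every packet destined to it is Internet background radiation (IBR).
DARK_PREFIX = ipaddress.ip_network("192.0.2.0/24")  # TEST-NET-1, illustration only

def is_ibr(dst_ip: str) -> bool:
    """A packet counts as IBR if its destination falls inside the dark prefix."""
    return ipaddress.ip_address(dst_ip) in DARK_PREFIX

# Toy capture: (source, destination) pairs as tcpdump might record them.
packets = [
    ("203.0.113.7", "192.0.2.42"),    # scanner hitting the telescope
    ("198.51.100.9", "198.51.100.1"), # unrelated traffic elsewhere
    ("203.0.113.7", "192.0.2.200"),   # same scanner, another dark address
]

ibr = [p for p in packets if is_ibr(p[1])]
scanners = Counter(src for src, _ in ibr)
```

With the toy data, two of the three packets land in the dark prefix, both from the same source, which is exactly the kind of per-source tally a telescope operator would build up.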
There is an analogy here, which is the reason we call it a telescope. If you have a real telescope, the kind you use to look into the sky, and you point it at a dark patch of sky where there are no stars, you would not expect to see anything, but you still see the cosmic background radiation. This is why we call these kinds of networks Internet telescopes.
So, what can we do with that? Why is this actually a thing?
Well, this has some security use cases. For network and Internet security, we need to know who is scanning: what networks are operating such scans; sometimes it is an educational network, but it could also be malicious. What ports are being scanned; by "ports" I mean the transport protocol port, so that is somebody looking for, let's say, a Telnet port, port 23. We will see those kinds of very prominent ports later on.
How many scanners are there, and who are they actually? What network, and what kind of person, is behind this?
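Answering "what ports are being scanned" from a capture is essentially a frequency count. A minimal sketch with invented port data (the dominance of Telnet and SSH mirrors what the talk reports later):

```python
from collections import Counter

# Invented list of destination ports seen in a telescope capture; the mix
# is made up, but echoes the talk's observation that Telnet (23) and
# SSH (22) dominate scanning traffic.
dst_ports = [23, 23, 22, 23, 80, 445, 22, 23, 443, 23]

# Tally the ports and keep the three most frequently scanned ones.
top = Counter(dst_ports).most_common(3)
```

Running this on real captures, per day and per source network, is what lets an operator say "port 23 is being scanned very heavily" later in the talk.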
Why this is interesting: usually, malicious people spoof their source address to carry out attacks, but when a malicious actor needs to know where systems are, they need to get the response back, so they would rather not spoof their source address.
This helps to gain insights into attack vectors, into what is currently prominently in use: for example, what new services are coming up that can be used for launching DDoS attacks. With this kind of knowledge, in the end, you could prevent such attacks.
Now, the idea that we had, the overall scheme, is this: there are these three sources. First, misconfigurations, for example a DNS service misconfigured to point to a wrong IP address which happens to be in your telescope, so you will see resolvers trying to contact you; that is just one example. Then you have got scanners out there, and attack backscatter.
The idea here is that we, as an IXP operator, can leverage our vantage point in the Internet, which is somewhere in the middle, so we will see this kind of scanning activity going through the Internet towards basically all networks. Some of these networks are the ones that actually operate telescopes, and we collaborated with them, as you will see. Then there is the rest of the Internet, and what this figure does not show too nicely is that some of this traffic also terminates inside the Internet backbone.
So, since we see the traffic that is going towards the telescopes and to all the other networks, we can derive certain characteristics from what the telescopes see, and then infer where there are more networks that see the exact same kind of traffic patterns.
So, as I said, we are collaborating with three operational telescopes, and from them we inferred the kind of traffic characteristics that telescopes usually see. With this, we developed a methodology that can be applied to any network in the forwarding backbone of the Internet, in our case IXPs, but it could also be ISPs, to find more address space on the Internet that behaves like a telescope, without necessarily being one. Those prefixes, subnets for which we could infer scanning traffic, we refer to as Meta telescope prefixes. We then try to find as many of these Meta telescope prefixes as possible, which in the end gives us a huge set of prefixes for which we know they receive scanning activity, and this effectively gives us a very large telescope without operating a telescope.
Traditionally, or typically, if you operate a telescope, you have one prefix. The bigger the prefix is, the more scanning traffic you will see and the more insights you might get. But still, one prefix. First of all, if a scanner knows about your telescope prefix, they can just blocklist that prefix so that they no longer scan it. The malicious actor knows: that telescope can no longer infer what I am scanning, so I basically make them blind to what I am interested in, and they cannot get the knowledge back to prevent my attacks.
So, once your prefix is blocklisted, your telescope is pretty much useless. This is why those prefixes are kept secret.
Second, if you have one prefix, it belongs to one AS, and sometimes scanners are interested in scanning data centre networks rather than, say, educational networks. If your telescope is located in an educational network, and for most universities that we collaborated with this is true, it does not see the kind of scanning activity that goes towards data centres. As we have got lots of these Meta telescope prefixes, they are basically in every type of network.
And lastly, networks located in a specific country are scanned more than others, or differently from others. Meaning that if you have one telescope that is, say, in the Netherlands, you will likely see scanning traffic aimed at the Netherlands, but not the scanning traffic of, let's say, China. In our case, we can also see scanning traffic that goes to Chinese networks; you will see the map afterwards covering the whole world, meaning that we have no such constraints.
And this is why this is actually a cool thing in academia.
All right, so, first of all, we need to get the characteristics of the traffic. We were collaborating, as I said, with three telescopes. Two of them are in Europe, in different countries; we refer to them as TEU 1 and TEU 2. The other one is in the US.
Those telescopes run tcpdump, so we get the full packet captures, which give us insights. The telescopes have different prefix sizes, so they are mixed. And we were looking at a 24-hour period somewhere this April.
First of all, when we investigated those PCAPs, we found the obvious thing: a telescope does not host anything, so it is not sending anything, so we do not have any outbound traffic here.
But what is coming in is more interesting, and this is where we start. First of all, we find that most of the traffic, depending on the telescope about 90% of incoming packets, is an empty TCP SYN. These are scanners trying to establish a connection on a certain port, trying to figure out what service could be running behind this port, behind this IP address.
And those packets consist of a 20-byte IP header and a 20-byte TCP header, but some of them have options set, which adds another 8 bytes to the packet size. Those options can be used for fast open and other things that attackers might be looking for. We did not know exactly what packet size threshold we should use, and this is why we did a sensitivity analysis on the packet size. The details are in the paper.
And it turns out that the average is exactly in the middle, between 40 and 48 bytes. So it is 44 bytes, which we used as our threshold. Coincidence? I don't know.
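The arithmetic behind that threshold can be written out explicitly. The filter function below is a hypothetical illustration of how such a threshold might be applied to a flow's average packet size; the paper's exact filtering rule may differ:

```python
# A bare IPv4 + TCP SYN is 20 + 20 = 40 bytes; with common TCP options
# (for example fast-open related options) it grows by another 8 bytes.
BARE_SYN = 20 + 20          # IP header + TCP header
SYN_WITH_OPTIONS = BARE_SYN + 8

# The threshold from the talk sits exactly between the two sizes.
threshold = (BARE_SYN + SYN_WITH_OPTIONS) / 2   # 44 bytes

def avg_size_qualifies(avg_bytes_per_packet: float) -> bool:
    """Hypothetical filter: a flow whose average packet size stays at or
    below the 44-byte threshold looks like scanning traffic."""
    return avg_bytes_per_packet <= threshold
```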
The next thing we looked at is the number of packets that a telescope receives. This depends on the size: the larger the telescope prefix is, the more packets you will record. Per /24 and per day, we found that this is 1.7 million packets on average, across the three telescopes that we work with.
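The per-/24 normalisation is straightforward: a /x prefix (for x <= 24) contains 2**(24 - x) /24 subnets. A sketch with invented totals, chosen so that the example lands on the 1.7 million figure from the talk:

```python
# Telescopes with different prefix lengths are compared by normalising
# packet counts to "packets per /24 per day".
def packets_per_slash24(total_packets: int, prefix_len: int) -> float:
    """Divide a day's packet total by the number of /24s in the prefix."""
    slash24s = 2 ** (24 - prefix_len)   # e.g. a /16 holds 256 /24s
    return total_packets / slash24s

# Invented example: a /16 telescope seeing 435.2 million packets in a day
# works out to the 1.7 million packets per /24 the talk reports on average.
rate = packets_per_slash24(435_200_000, 16)
```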
Lastly, well that's it, there is more coming after.
So now we have those characteristics of scanning traffic, and we apply them to our IXP dataset. We have 14 IXPs in our study, spread across Europe and North America, with one in Asia, and they see high peak traffic volumes. We had a sampled flow dataset for the same time, so once again a 24-hour period somewhere this April.
So, here is the pipeline that we basically had. On the left, you see those characteristics once again. We added four filters to narrow down this flow data to find the Meta telescope prefixes. We were looking at TCP only. Unfortunately, we could not check the TCP flags, which would help us find the TCP SYNs, but we knew that 90% of it all is TCP SYNs, so that was okay to neglect.
Then, we have got our magic byte threshold. We are not looking for any outbound traffic, and we further filtered out reserved IP addresses, because they cannot be a telescope, right? We also required the telescope to be globally routed; you have to announce your prefix. And we applied our packet count threshold.
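The four filters can be sketched as one predicate over per-/24 flow statistics. The thresholds and the records below are placeholders, not the paper's exact values, and real sampled flow data is of course much richer than this:

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class FlowStats:
    """Aggregated flow statistics for one /24; all values invented."""
    prefix: str
    tcp_share: float        # fraction of the traffic that is TCP
    avg_packet_size: int    # average bytes per packet
    outbound_packets: int   # packets sent *from* the /24
    inbound_packets: int    # packets sent *to* the /24

def is_meta_telescope_candidate(f: FlowStats, pkt_threshold: int = 100) -> bool:
    """Sketch of the talk's filters (thresholds are placeholders)."""
    net = ipaddress.ip_network(f.prefix)
    return (
        f.tcp_share >= 0.9                       # TCP only (flags unavailable in flows)
        and f.avg_packet_size <= 44              # the byte threshold
        and f.outbound_packets == 0              # a darknet sends nothing
        and net.is_global                        # reserved space cannot be a telescope
        and f.inbound_packets >= pkt_threshold   # enough packets to judge
    )

candidates = [
    FlowStats("93.184.216.0/24", 0.95, 42, 0, 500),   # passes every filter
    FlowStats("185.0.113.0/24", 0.95, 42, 30, 500),   # grey net: some outbound
    FlowStats("10.0.0.0/24", 0.95, 42, 0, 500),       # reserved, disqualified
]
surviving = [c.prefix for c in candidates if is_meta_telescope_candidate(c)]
```

The prefixes here are only stand-ins; in the toy data, one of the three /24s survives all five checks.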
So, if you look to the right, at this figure here: we started off with some 6 million /24 subnets in the IXP dataset. By successively applying the filters, we got down to about 370,000 Meta telescope prefixes. We found some 800-something thousand more /24 nets that violated one of those filters. If they violated the no-outbound filter, then it is a grey net; grey net means that some portion of the network appears to be active and another part of the /24 appears to be dark. However, we were looking for the real darknets: those /24s for which every data point we had qualified exactly as scanning traffic.
So, of those 380 K that we saw, we are now down to 318 K. This is because, so far, we have just been seeing what we were measuring, but we cannot really tell whether we are just measuring scanning traffic or whether the prefix we are looking at is actually inactive. So we used external data sources, namely Censys, NDT and ISI, to further eliminate blocks that those data sources told us are actually active. We considered the set we had before to contain false positives, and this is the set for which the external data sources confirmed we are actually on the right track.
So, this table shows the stats for all the individual sites. As I said, we have got 14 IXPs. If we combine them, we find that fewer Meta telescope prefixes survive our filtering compared to looking at certain sites individually. This means that the more knowledge from different IXPs we combine, the more precise we can get. The false positive rate here is higher than here; this is the lowest we could get, about 11-point-something percent; the details are in the paper.
Further, we find that the largest IXP, which is CE1, finds Meta telescope prefixes in over 200 countries. This is awesome. The equivalent would be needing to get 200 different prefixes in 200 different countries as a researcher, for example, or an enthusiast or whoever; this is basically undoable.
And then we found that the second largest one finds Meta telescope prefixes in the most different ASes, which is over 8,000; almost 9,000, actually.
So far, we have inferred scanning traffic. All right, cool. But is this actually correct? To get an idea of how precise our work has been so far, we looked at the address space of a telescope that we collaborated with, so this is ground truth, and we figured out how many of the blocks we identified are actually located inside the prefix that we know is a telescope.
And this is how it looks. This is a Hilbert curve showing colourised blocks for the ones we inferred; these are /24 subnets. The white ones we do not have any data point for: either they are active or disqualified, so we could not make a claim, or we claimed that it is not a telescope block. We find that the ones we inferred are clearly inside the boundary. While I was browsing through this Hilbert curve of the whole address space, you could clearly see these kinds of patterns. This looks like one big block, which is telling me: Daniel, look at this, there is a block or a larger region that contains only inactive addresses. When I mapped the boundaries of our telescope on top, I thought: okay, looks good. There are some blocks inferred to be dark outside the boundaries, which is okay, because not all blocks on the Internet need to be active; they could actually be dark.
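The Hilbert-curve layout works because consecutive positions along the curve are always adjacent in 2-D, so contiguous address ranges show up as solid patches rather than scattered pixels. A standard distance-to-coordinates conversion, shown here only to illustrate the visualisation technique, not the speakers' actual plotting code:

```python
def hilbert_d2xy(order: int, d: int) -> tuple[int, int]:
    """Map a distance d along a Hilbert curve of side 2**order to (x, y).

    Classic iterative algorithm; plotting consecutive /24 blocks at
    consecutive d values keeps neighbouring address space visually close.
    """
    n = 1 << order
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x                  # swap x and y
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# The first four steps of an order-1 curve trace the characteristic "U".
corners = [hilbert_d2xy(1, d) for d in range(4)]
```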
As I said, there were two more blocks that really caught my eye. For example, there is another known telescope, which is pretty huge, but which we did not collaborate with. I can tell you that these three big blocks are the boundaries of that telescope; I could also have drawn it that way. And then there appears to be another block, but we don't know what's going on there.
And then we found a very large chunk of unused IP address space somewhere in the Internet. These kinds of patterns really catch the eye.
Well, all right. So let's see where they are actually located. We used MaxMind GeoLite2 to map those prefixes to country codes, and with this we generated this kind of heat map, a world map, I'd say.
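The aggregation step behind the heat map is a simple count per country code. In the study the lookup came from MaxMind GeoLite2; the sketch below leaves the lookup pluggable and substitutes an invented table for the real database:

```python
from collections import Counter
from typing import Callable

def country_heatmap(prefixes: list[str],
                    lookup: Callable[[str], str]) -> Counter:
    """Aggregate prefixes into per-country counts.

    `lookup` stands in for a geolocation call (GeoLite2 in the talk);
    here it is any function mapping a prefix to an ISO country code.
    """
    return Counter(lookup(p) for p in prefixes)

# Invented table playing the role of the GeoLite2 database.
fake_db = {"a.example/24": "US", "b.example/24": "US", "c.example/24": "CN"}
heat = country_heatmap(list(fake_db), fake_db.get)
```

With the real database, the lookup would be something like `geoip2.database.Reader(...).country(ip)`; keeping it as a parameter makes the aggregation testable without the `.mmdb` file.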
So, the finding here, from our dataset, is that most of these Meta telescope prefixes are located in the US. Then, according to our dataset, the second most populated country for Meta telescope prefixes appears to be China. This could be because we do not really see the traffic for Chinese networks, but it could also be that they are not active to the outside, or that they are just unused. So, different possible reasons here.
We also find that Africa tends to have the fewest Meta telescope prefixes. This could be due to the limits of our visibility of the traffic, so we could not tell anything, or it could be that they just have to use what IP addresses they have.
Next up, we compared the ports, that is, the applications that are being scanned for. In the left table you can see what we got from the PCAPs, from the telescopes we collaborated with. On the right you have the table for what we inferred with our methodology. Overall, we find that Telnet (port 23), SSH (port 22) and certain web services are the most prominently scanned.
So, in green, I underlined the overlap. We find that we are somewhere close to what the ground truth is saying. What is more interesting is that one of the telescopes we collaborated with was configured, on the ingress router, to drop any incoming packets on port 23, which is Telnet, and port 445, for security reasons. We don't have such filters; we just observe what the Internet is telling us, and we find those to be the most prominent ports in our dataset. So we could go ahead and call TEU 1 and say: I know you are going to be scanned on port 23 very heavily.
Then we did the same kind of analysis for network types. We were looking at what network types are being scanned, in what fashion, and on what ports most prominently. First off, ISP networks dominate where we found the Meta telescope prefixes to be located. But we could still find things like data centre networks being scanned on port 80 or 88 more than on other ports, both inside a data centre network and compared to other network types. This is not to scale, but the takeaway is still visible here.
This is something you could not have if you just had one prefix located in a single network. And then, once again, per country or continent, we found something I just want to highlight quickly: in Africa, for the day we were analysing, there apparently was a campaign going on on port 317215; I don't know what service is behind that port. Once again, such insights are not possible if you have your prefix in the Netherlands or in Germany or wherever.
So, I should be summing up. In conclusion, we found that there are lots of /24 blocks around the globe, in all network types, that can be inferred as so-called Meta telescope prefixes. And as I said, the inference that we presented is applicable to basically any kind of network in the backbone of the Internet that is carrying IBR.
And for whoever is interested in leveraging some of these insights, this helps to improve Internet security research overall.
So, I think I have got a couple of minutes for discussion. I am very happy to take any questions. Thank you very much.
MASSIMILLIANO STUCCHI: I don't see any questions, I didn't see any online the last time we checked.
DANIEL WAGNER: If I may I have got some ‑‑ please go ahead.
AUDIENCE SPEAKER: Daniel Karrenberg, current RIPE NCC staff member, speaking for myself. And of course the whole thing was ‑‑ you know who I am. You were looking at packet traces from IXPs; were there any privacy concerns? How did you deal with them?
DANIEL WAGNER: We had to deal with them. This is the third time I have given this talk, and the first time somebody has raised that question. And this is the third time that my boss told me to watch out for that question.
All right, jokes aside. First of all, we anonymised the data to /24s, so we do not have any individual IP addresses and do not know who the individual behind an IP address is. That being said, the second thing is that this is somewhat of a live analysis: packets come in, are analysed, and are afterwards discarded. We don't have those traces any more; I have the results and the plots, but I don't have any of the raw data any more. Also, it is sampled data, aggregated IPFIX that is used for statistics purposes. Lastly, the type of traffic that we are inferring, this scanning traffic, is something that the subject, whoever might be behind that block, did not request. It is just like garbage at their front door, and we are looking at it.
DANIEL KARRENBERG: You asked for it again. Science is not science if it cannot be reproduced so if you throw away your raw data, basically this is worthless.
DANIEL WAGNER: It's not. It's not. Because you can apply all of those things to your own dataset and see that it works. Trust me, I double‑checked.
MASSIMILLIANO STUCCHI: Brian, do we have anything online?
BRIAN NISBET: Apologies, yes, we do.
PETER HESSLER: I see that this analysis is on IPv4. Did you have a chance to do a similar analysis on IPv6? I would be especially interested to see how they compare, because IPv4 is densely utilised, whereas IPv6 is generally sparsely utilised.
DANIEL WAGNER: Right. For our analysis, we did not yet, and I have to say "not yet", because we do have plans to apply our analysis to IPv6, and we are curious to see the outcome. Generally, we would have to tweak our parameters for the analysis; as you say, the density is much lower, apparently. So we would need to adjust our packet threshold to be more on the spot with what the telescopes, the ground truth, are telling us. As I say, we have not yet performed that kind of analysis, but we expect, well, rather less interesting insights. Connectivity in IP version 6 is very different, and it is not used that much on the net. What I think we would get as a result is basically a very, very dark and boring large IPv6 address space. That is my expectation.
PETER HESSLER: Yeah. I think that would be the current expectation, but I'm also curious if you could collect this data now and then again in five years and again in ten years etc., and see how this changes over time.
DANIEL WAGNER: If we are legally allowed to store the data for that period; that is a process we would have to go through. Technically it is possible. Seeing the long-term evolution of IPv6 address space scanning activity is theoretically possible, so I will have to check with my colleagues whether we can get something like this arranged, but theoretically and technically it is possible, I would assume, depending on the data volume you have to store.
PETER HESSLER: Thank you.
GERT DÖRING: This is interesting stuff that you bring. Peter prompted me to ask this question, because he said "densely populated", and what you seem to find is that there are large chunks of the IPv4 Internet which are not populated. So maybe the rumours about IPv4 exhaustion and run-out are slightly exaggerated. This is something that really makes me wonder: how much of the v4 space is actually still sitting somewhere, waiting for the IPv4 price to rise to €100 per IP address?
DANIEL WAGNER: This is a question back to the community: do we want this? I don't know. I was about to put this on the slide, basically as a question: is this wanted? Is this what we want? Are we aware of this? Is anybody aware of that many blocks being inactive, while they are allocated? Please come again... so this was: yes, I am aware. So people are agreeing. So yeah, here we have got a rough number of what the situation appears to be. And then the question is how we should deal with that. Should we somehow transfer them back and make use of them? That is up to you to discuss.
MASSIMILLIANO STUCCHI: We have no more questions, so thank you very much.
Now we have Menno with the tech report from the RIPE NCC.
MENNO SCHEPERS: Good morning, everyone. This is the technical report for RIPE 87. You have probably seen a group of techies running around. Here is a picture of a selection of the engineers that have been helping to set up the meeting. The meeting starts on Monday, but we already arrive on Thursday; I did at least, along with a couple of my colleagues, and we started setting up the network on Friday.
Then, also on Friday, more colleagues arrived and, together, we set up the network, the presentation system you see there, and all the wi-fi access points. Already during the weekend the network was used for several Board meetings, so it was important for us to get everything up and running as soon as possible after we arrived.
This is a list of things that we bring to make this meeting work. I'll show you some pictures later of the equipment, some of the equipment. But first, let's have a look at the network that we had running here this week.
This is the topology. You see that COLT is our uplink provider; they provided us a 10-gig link on which we got a full table. You also see here how we have set this up; the slides are online, so you can download them if you want a closer look. Here is the physical network topology. We have the COLT uplink and a Juniper switch, to which we have connected all of the other switches, including the little switches you might have seen lying around in the rooms.
These are the other switches. A lot of cascading; this had to do with the hotel infrastructure. There were some challenges here. There were many broken sockets, and one very annoying thing was that they had painted the walls, including the sockets, so we couldn't read what number was on a socket. Here you see me testing, with a colleague on the other end, to figure out which port was going where. Apart from the ports not being labelled properly, many of them were simply broken. Fortunately, there were some usable ports, and we made use of those. That is why you see those switches on the wall, to which we then connected all of our access points.
There was also missing documentation, and a lot of the documentation in the patch rooms was outdated. So we looked at it, but it was pretty useless.
And then the tripping circuit breakers: you might have noticed on Tuesday that we had power issues. The hotel couldn't tell us why, but looking at this spaghetti, we might have an idea.
Also, they made it very difficult for us to reach our equipment when they put all the cleaning stuff in front of the patch cabinet, but yeah, we managed. We had to make some of our own cables, which is a very dangerous task.
AUDIENCE SPEAKER: Maybe you should invite somebody from the hotel to listen to this.
MENNO SCHEPERS: We'll send them a video for sure. You see here, how dangerous it is to be an engineer at the RIPE meeting, but, you know, somebody has to do it.
The cable that you see here goes to the bar area; some of you rightly mentioned that there was no coverage there at the beginning of the week. So we went there and made it happen. There were no patches in that area, so we had to run a cable of over 100 metres, put a switch in between, and eventually got it to work.
This is not us. This is the hotel doing investigations on the power outages, but I don't know if they are the cause of them or if... we don't know.
Here you see the circuits that we took pictures of, so that we knew at least what things look like when everything is working. That way, we could fix problems ourselves and didn't have to ask the hotel for help.
This is a map of the access points that we installed throughout the hotel. As you can see from numbers 3 and 4, that is the breakfast area, also partly used for lunch; we didn't cover that, but the other lunch area was covered, and you could see a lot of people made use of it, because there wasn't much space to sit elsewhere. The coffee break areas downstairs were quite small, there were no tables, etc. But I think everyone managed to find working Internet, because, well, I don't know how it was in your room, but my room had no Internet, and some people had a little bit of Internet.
Here are the wi-fi network graphs. Compared to the previous RIPE meeting, you see that there is a drop in dual-stack network usage and an increase in IPv6-only, but what is also interesting is that there is an increase in 2.4 gigahertz usage. I'm not entirely sure why that is.
Also, we made a change: the IPv6-only network is now a pure IPv6-only network. There is no NAT64 any more; that has been replaced with this. So, if you want to experience what it is like to be on an IPv6-only network with no trickery to get IPv4 content working, then join this network. As mentioned here, it is surprisingly usable, but there are still websites out there with only IPv4 DNS records; GitHub is an example, and Twitter as well.
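The GitHub/Twitter observation boils down to this: without NAT64, a name with no AAAA record is simply unreachable from the v6-only network. A toy sketch with a hard-coded record table standing in for real DNS lookups, reflecting the records as the talk describes them at the time:

```python
# Toy DNS data: which record types each name resolves to. On a pure
# IPv6-only network with no NAT64, only names with an AAAA record work.
dns_records = {
    "www.ripe.net": {"A", "AAAA"},
    "github.com":   {"A"},           # no AAAA: unreachable without NAT64
    "twitter.com":  {"A"},
}

def reachable_v6_only(host: str) -> bool:
    """A host is reachable from the v6-only network iff it has an AAAA record."""
    return "AAAA" in dns_records.get(host, set())

unreachable = sorted(h for h in dns_records if not reachable_v6_only(h))
```

A live check would use a real resolver instead of the table (for example `socket.getaddrinfo(host, None, socket.AF_INET6)`), but the decision logic is the same.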
So, about the legacy network: it is the dual-stack network, and it is the only one that had 2.4 gigahertz.
AUDIENCE SPEAKER: But you have also got the situation where a host computer might think it has IPv6 when it hasn't.
MENNO SCHEPERS: We were wondering why people were joining that network; some of them joined with 5 gigahertz devices, and we think those should be perfectly capable of working on the main RIPE MTG network, but for some reason people joined the legacy network. We don't understand why; they might have had a random issue and fixed it by joining the legacy network. We would like to know why, if you have any idea; maybe it's something we can work on fixing, or at least it would be nice to understand why people join the legacy network.
Then, it's time for a new acronym. We have many in this community.
STENOGRAPHER: Tell me about it!!!!
MENNO SCHEPERS: DDR. What does it mean? Dance Dance Revolution? No. Double data rate, the DDR memory? Also no.
It means Discovery of Designated Resolvers. There is an RFC for you to read on your flight back, if you are bothered; download it before boarding. Discovery of Designated Resolvers helps clients get the information for encrypted DNS, so that they can do DNS over HTTPS or DNS over TLS. This is something that Windows and macOS support, and something we also wanted to offer here, but it requires a TLS certificate, so we went out to get one. We need a SAN entry with the IPv6 address, and we wanted to do this with Let's Encrypt, but of course they don't support IP address validation; I mean, they don't support an IP address SAN.
So, we went to DigiCert, which quoted us $1,057 for a year, but we thought that was way too expensive, so we went to the next one out there, CERTEGO, who offer a certificate for $149 per year. But when we tried to get this certificate, we got an error: error minus 1. So we contacted support, and support said: well, we kindly request you to go further with the order without adding IPv6 to the SAN. So, okay, thank you, but we'll shop somewhere else.
Then we got to SSL.com, at €177 per year, and they support IPv6. But now IPv4 was causing trouble. You see that we added those three records and the v6 addresses, but when adding the IPv4 address, we got an error and couldn't add it to our cart. After contacting support, they told us another way to do it, so we eventually managed to get the certificate, installed it, and started to make use of it.
And during the meeting we saw an increase in DNS over HTTPS, which was good, but we also saw some issues. Mac users may have seen this when they browsed to GitHub, and maybe some other websites too. It was a bit weird, because it said the site was blocked by RIPE MTG; we are not blocking anything on purpose. So we went to have a closer look at this.
What we did was run some tests, and some of you helped out as well by changing settings in macOS. Some things worked for a short while, but the problem often just came back, and you got the error again that Safari couldn't open the page. We turned off DDR and, as you can see, it dropped, the regular DNS requests grew again, and that helped. The problem was solved, but we still don't know what is going on. So if anyone has an idea, let us know; otherwise we'll try to figure it out. Hopefully by the next RIPE meeting we'll know what is going on, and hopefully there is a fix.
Then, this is a slide from a previous RIPE meeting as well; it's an explanation of our presentation system, because some people are interested in knowing how that works. The magic is happening over there: we have two Mac Minis running with an HDMI switcher, and while one presentation is up, we can prepare the next presentation on the other. With the switcher we can seamlessly switch between them.
We don't just have a sequence of PDFs ready, because some of you have live demos, and there are also people who present remotely via Meetecho, so this way we can support all of that: live demos, etc.
There is also a timer. I have zero seconds left it says here. That means that I need to quickly go through my following slides.
That is just some statistics. These are in the slide deck, so you can have a look at them, and we can also look back at previous years to see what was happening. So here you see, well, the traffic on our 10-gig link; we're not anywhere near that. So that's good: we don't have to ask for 100 gig next time.
Then, DHCP leases. These are lower than before, because we now have a mostly IPv6 network and most people are on that; these are the v4 leases.
The wi-fi clients, also as a graph. You see how busy it was, and you can see the lunch breaks in here. Maybe you can even see a certain BoF at some point, but it's not that clear this time.
Here is my last statistic: people joining remotely on the Meetecho platform. You see IPv6 at 63%, which is, I think, okay, but it should be higher, of course. I am curious what we'll see at the next meeting six months from now. OS usage and browser usage might be interesting statistics for some of you.
And then thanks to these teams as well: web team, the stenographers team, the Meetecho team and the conference coordinators, and, with that, that's the end of my presentation.
BRIAN NISBET: I'm going to close the line right now. Was this a coordinated move? Anyway, please...
ERIK BAIS: Thanks for all the great work, even though there were some challenges. As engineers, we all know you need to bring your own plasters; if not, you're calling in 250 people from the Red Cross for the clean-up, you know... Anyway, I would like to present you with some plasters in case you need them, because it's probably easier.
BENEDIKT STOCKEBRAND: The reason for the 2.4 gigahertz use might be, at least if I remember correctly, that at some point, when I first switched to the wi-fi, it was the only network I could reach, either because 2.4 has a better range or because the other one didn't reach that far. Once people set that up, they stick with it in a lot of cases; that might be the reason for that.
MASSIMILLIANO STUCCHI: We have not a question but a remark: "At least one of the microphone queue cameras showed some video garbage for a couple of seconds each time video sharing in Meetecho started. Not sure this can be resolved." This is something for Meetecho.
AUDIENCE SPEAKER: Hi. So, I would like to confirm, on the 2.4 gigahertz, what the previous speaker said, but mainly I want to say thank you so much for the IPv6-only network. I was the one who mentioned it in Rotterdam. I am really happy it's here, and now we can start making a list of apps that don't work on IPv6-only. Thank you so much for this experiment; I think it's making a change in understanding what works and what doesn't. Thank you.
BRIAN NISBET: I am just going to say that it's not a terrible thing that Twitter isn't reachable on that network. Just block it from all the other ones too at the next meeting. Cool.
AUDIENCE SPEAKER: Maksym Tuliev, Netassist. I saw you used certain Raspberry Pis; for what?
MENNO SCHEPERS: The Raspberry Pis are used for the televisions that you see, not in this room but outside, where we show some slides or moving images. We also use them in our ops room, where the stenographers are; they have two screens there with Raspberry Pis connected that show what's going on here. Instead of having a lot of laptops or MacBooks, we use these smaller devices for the various tasks that we have.
BRIAN NISBET: Nothing else online? One last question.
AUDIENCE SPEAKER: Hello. You said that you have a full view from COLT, but it's your only uplink, so why do you need a full table from COLT?
MENNO SCHEPERS: We have a full table because we wanted to do RPKI, and to do the RPKI filtering we want to have the full table.
BRIAN NISBET: Thank you again for all the work that all the teams have done.
I think that this has probably been one of the most challenging environments that you have had in many, many years, so well done.
Okay, so we now have a report from the Code of Conduct team. So, Sebastian:
SEBASTIAN BECKER: Welcome to the report of the Code of Conduct team. We have had three reports submitted so far: two by e-mail and one in person. One has been assessed and is considered concluded; the other two are pending and in assessment. So, this is a report so you get an idea of what happened during the meeting.
For the one that's concluded, the assessment group was formed; this is how we proceed. We reached out to the subject of the report and to the party that the report was about. The incident involved a breach of the RIPE Code of Conduct, and the two parties agreed to resolve it between themselves.
Any further personal details will not be shared. This stays within the Code of Conduct team and obviously between the two parties. As for the two others, these are still pending, so we cannot add any more information here.
How to report:
Just as a reminder of how that works: if you experience behaviour that makes you or others uncomfortable, please speak with us. You can inform the whole team via the form, which you'll find at the link here. You can mail the Code of Conduct team. You can reach us in person by just going to one of the members, who are shown on the next slide. And you can e-mail them individually; the template for that is firstname [dot] lastname [at] coc [dot] ripe [dot] net.
And finally, this is the team currently. There are some people on the team already, but we are looking for more volunteers, to have a wider group of people who can handle such reports and deal with incidents if they happen.
So feel free to contact us if you want to know more about the process. For how to join, please visit the website, ripe.net/coc. And that's it.
MASSIMILLIANO STUCCHI: Please...
AUDIENCE SPEAKER: Hi. Harry Cross here. Just a question: is there going to be a chance for us to hear about the outcome of the two reports that are still being assessed? I just think that would help confirm the effectiveness of the Code of Conduct process more than anything else.
SEBASTIAN BECKER: Yes, we will go on and do a final report later.
AUDIENCE SPEAKER: Thank you. I really think they should be resolved by the end of the conference to ensure that there is some sort of outcome, if possible.
SEBASTIAN BECKER: If possible, yes. Obviously we try, but I don't think it's possible in every case.
AUDIENCE SPEAKER: Sander Steffann. Thanks a lot to the whole team, because this is not an easy job and it's a lot of responsibility. So first of all, thank you for that. And at some point in the future, I'm not putting you on the spot right now, it would be interesting to see if there is a trend up or down, or if it's just all different types of violations. Are there specific areas that we need to pay more attention to? We have very few data points with only three complaints, so I guess it's kind of hard, but I would be interested to see if we, as a community, could do better in some ways.
SEBASTIAN BECKER: We obviously keep some records for a certain time. I am not sure that will give, as you described, a lot of data or interesting reports afterwards, and hopefully there will not be many.
SANDER STEFFANN: I'm really happy to hear that the case you talked about was handled amicably between the parties. That's definitely a good thing to hear.
SEBASTIAN BECKER: That worked pretty well.
MASSIMILLIANO STUCCHI: We have a remote question.
PETER HESSLER: I was actually going to comment on that. I'm very happy to see that the Code of Conduct was used in a way that was not an attack on a person and was not a punishment on the party who was the subject of the report. And that even though it was brought to the Code of Conduct team, the parties were able to resolve it amicably. And I just wanted to highlight that was a very nice thing to see.
SEBASTIAN BECKER: Thank you.
AUDIENCE SPEAKER: Jordi Palet: I understand that we cannot know all the details of every case, but I think it would be good for the community to understand what the breach was, or how it was perceived, because that helps people in the community realise: maybe I am doing something wrong and have not noticed it, and I should avoid that.
SEBASTIAN BECKER: Yes, but obviously it's hard to go into details without revealing more specific or personal information.
AUDIENCE SPEAKER: Denesh ‑‑
BRIAN NISBET: Sorry, because we have a question here, which is in the same vein, from Antonio Prado:
"Is it possible to know in which categories the infringements are?" Again, it's a similar request for shareable information, but just to add to that.
SEBASTIAN BECKER: On that: basically, the resolved one was more of a misunderstanding, and it could be solved by talking to the parties and making them aware of it.
AUDIENCE SPEAKER: Denesh Bhabuta, speaking in a personal capacity here. So, thank you to the COC team for the work that's been done so far. I have a question about the evolution of the COC: have you had any thoughts on what has been done so far and how to make it better for the future?
SEBASTIAN BECKER: Yes.
AUDIENCE SPEAKER: Let me expand on that. Are there any ‑‑ well, are you able to expand on your thoughts?
SEBASTIAN BECKER: That's not easy, because for me this is the first time in the role and at the meeting, so my experience is mostly from this meeting. But I think we can follow up on that.
AUDIENCE SPEAKER: Daniel Karrenberg. I would caution us against asking for too many statistics and details from the RIPE Code of Conduct team. Evolving the Code of Conduct and giving recommendations on what the community should do is best not engineered by the Plenary. We should trust the Code of Conduct team to come back to us if, by their judgement, something needs to change. I'm as curious as anyone about what's happening, but I don't think it's appropriate for them to share more than they have shared. Thank you.
AUDIENCE SPEAKER: Valerie Aurora, speaking for myself. Just a quick note that, if you do want to know the details of what's happening at the Code of Conduct meetings, you can join the Code of Conduct committee.
SEBASTIAN BECKER: But you are not allowed to reveal them then.
MASSIMILLIANO STUCCHI: Thank you.
And now, before we hand it over to Mirjam for the closing, Brian, come over for a tradition. We have to take a selfie.
I'll hand it over to Mirjam for the closing remarks.
MIRJAM KUHNE: Thank you. Thanks for running the last Plenary session of this meeting, and now we are nearing the end; this is the Closing Plenary. I am Mirjam, the RIPE Chair. Niall, the Vice-Chair, has been participating online via Meetecho all week and has been a great help in the background. I miss him here dearly, but he has been very active behind the scenes supporting us all.
Just to give you some statistics. We had, in total, 728 attendees checked in. Of those, 557 were on site and 171 online. On Meetecho, we had 463 unique participants. We also had 148 newcomers: 112 checked in on site and 36 online.
If you look at the distribution by country, we had people from 58 countries represented. Italy didn't quite make it to the top: Germany is still on top, but Italy was second. It's good that there's such a broad distribution of people from various countries participating here on site.
Online, we also had quite a broad distribution, from 41 countries, and maybe it's natural that Italy isn't on that list, because they are all here, hopefully. So we see others there on the chart.
As I mentioned at the Opening Plenary, we also had, for the first time, local hubs; six of them this time. Basically, it means people came together in a room, at a university or in an office, to participate in the RIPE meeting together. It's a bit like the Eurovision Song Contest, I guess: you all gather and watch the RIPE meeting together. And we had some really nice feedback from them. Someone wrote that it was really helpful and very much appreciated to be able to get together and socialise with other participants, and that it enabled them to participate in a RIPE meeting in a meaningful manner for the first time. That was really nice to hear.
We have some impressions here from some of the hubs. I don't know why Vesna appears in Estonia, but I think this was basically a screenshot of the Estonians' screen; Vesna ran the hub in the Netherlands, I suspect.
But there were also others, and I am really pleased to see that engagement. We'll have a look at this and evaluate it a bit more, to see if it was useful, whether we want to repeat it, and what we can improve in the future.
These are the NRO NC election results. We had elections this week for the Number Resource Organization Number Council, to fill the seat that James Kennedy had vacated when he joined the RIPE NCC, and I am happy to announce that Constanze Burger has been elected.
Just to show who you are, Constanze.
CONSTANZE BURGER: I am overwhelmed and so thankful to be a part of this community. Thank you for the trust you have given me. I am really happy to represent you, and I will give all my skills and knowledge to representing your needs on the NRO NC. I am really happy to work together with Herve and Sander, and with you all. I am open for discussions, talks, and a beer. Special thanks to Randy, who nominated me, and to Tahar and Sebastian, who supported me, and to my CEO, thank you for making it possible for me to do this. Thank you so much; I am thankful.
MIRJAM KUHNE: Right. And we have some more people to thank here again. The RIPE Code of Conduct team did a great job this week, and I am glad this has become a normal part of the RIPE meeting, including the transparency reports at the end. As Sebastian said, we are always looking for new volunteers; I was happy we could add more people to the team this time. Over time, we are making this a larger team so that not everybody has to be at every RIPE meeting, and we'll see who is available online or on site. If you are interested, reach out to the Code of Conduct team or come talk to me, and I'll give you more information.
Now, some Working Group Chairs are leaving their roles. Kurtis Lindqvist, the Working Group Chair for RIPE NCC Services from day one, for 20 years, has now stepped down. He said: "Finally, this is it, I am not running again, find someone else." Luckily, that was possible.
We also had Joao Damas, who had been in his role in the DNS Working Group for a very long time; his term ended. And Benedikt Stockebrand stepped down as co-chair of the IPv6 Working Group. Thank you all.
I also wanted to mention that Ondrej Filip stepped down last time as the MAT Working Group Chair. There was also an open call for volunteers for the Open Source Working Group co-chair, and that's being discussed on the mailing list right now.
We have a number of new faces, the incoming Working Group Chairs: Doris, as co-chair for the DNS Working Group; Janos, for the NCC Services Working Group, replacing Kurtis; and Alex, who kindly stepped in to replace James, who had to step down halfway through his term, so Alex is helping out for now, but there will be an open call and another selection for the Address Policy Working Group before the next RIPE meeting. Then we have Christian, who was selected as co-chair for the IPv6 Working Group. And David was selected as co-chair for the Database Working Group just after the last RIPE meeting, so we didn't have him on the slide last time; welcome, David.
So this is the full list of Working Group Chairs right now. All those people have done a fantastic job in putting together the agendas of the Working Groups. I think we had good agendas and discussions, and some Working Groups were experimenting a bit, not filling all the time with presentations but leaving some space for discussion, and I think that worked out really well.
You see here all the Programme Committee members who were responsible for the agenda of the Plenary talks this week, and there are also two who are leaving the Programme Committee. Thank you very much, Alexander and Dymtro, for doing a great job over your terms.
We will miss you on the Programme Committee. But we also have two new members on the Programme Committee, as Brian has already announced. One of them is Valerie Aurora and the other is Franziska Lichtblau, who will come back to the PC after a break, basically.
I'm not sure how we're going to do this with all the gifts. We'll wait until the end with the PC because we have some other winners.
The tradition is to reward some people who were quick to register, and we hold a raffle amongst a few groups. This time, we have a raffle amongst the local participants. These are the winners; if you are in the room, please come up and receive a small gift in appreciation for participating here and being quick to register.
Kajal and Wan‑Min will be very active in organising the RIPE meetings from now on, so you'll see their faces.
So we have two newcomers, Steven Davidson and Katerina Lionta, if you are here in the room. Thanks for being a newcomer and for registering for this RIPE meeting.
Some of the gifts are the books that some colleagues from Namex have kindly contributed as gifts for our participants.
And then, importantly, after the newcomers talk on the Monday, we did a Kahoot quiz. There were two winners. One is RSC 224; we don't know who you are, so please find us if you'd like to receive your gift, and we can hand it over or send it to you. The other is Katerina; I don't see her, but we may find her later. That was a fun quiz, and it's good to see newcomers participating so actively.
There is a feedback form to fill in; please give us feedback about this meeting and about RIPE meetings in general. And if you missed anything: here is another selfie that Max and Jan took earlier this week, and there are the links to the meeting archives, the presentation archives for all the presentations, and also the daily meeting blog in which the RIPE NCC keeps the main highlights of the meeting; it's really fun to read.
And then lastly, I would like to thank all our sponsors. Obviously the hosts from Namex, and the RIPE NCC, which contributed not only staff resources but also financial resources. There were Oracle, VeriSign, IPv4 Global, Cisco, NTT, the Internet Society, and COLT as the connectivity sponsor. So thanks to all the sponsors; they contributed a lot.
Before I move on, actually, I want to give all the PC members their gifts, otherwise it gets a little out of hand here. So would the PC members who are still here in the room come up so we can see them, including those who stepped down this term.
The Closing Plenary is always a bit ceremonial, so...
Thank you all. And of course a huge thank you to the organisation team. Sandra, I don't know if you are in the room... yes, there you are. Thanks, Sandra; she went above and beyond.
And of course also the colleagues from the events organisers, the organisation team, the tech team, everybody who has contributed to making this RIPE meeting possible.
So, now I would like to introduce the next RIPE meeting. RIPE 88 will be in Krakow, Poland, and we have a short presentation from the local host: it will be Akamai, who will host us in Krakow. He is coming up. And I just want to close with our new RIPE logo, which you have already seen in the Community Plenary; from the next meeting on, you will also see that new logo on the slides.
So thank you all; that was all from me. I'd like to give the floor to the local host for the next RIPE meeting to say a few words, and then I wish you a nice lunch. Maybe one note on the lunch: this time we only have one lunch room, the larger breakfast room to the right as you come up; that's where we'll have lunch. I wish you a great trip home, and I hope to see you all again next time.
PATRICK BUSSMANN: Hello, all of you. I promise this is only going to be a half-hour to 45-minute sales pitch, maximum. For those of you who don't know me, my name is Patrick Bussmann, and I am speaking with my Akamai hat on, I think for the first time this week; I am trying to differentiate by wearing the right hoodie. I have the pleasure of inviting you to Krakow, and I wanted to say two or three sentences about why, because it might be confusing to see a German standing on the stage, from an American company, inviting you to Krakow, Poland.
So, bear with me for two seconds there.
Akamai has a 13-year history there, with over 1,000 employees. It's our biggest office outside of the US, and nearly all our engineering departments are represented there. So we have a huge heritage there. We are highly invested in the local women's and diversity groups, in the university programmes, and in academic exchange. We actually have classes, backed by Akamai, at the local universities, and we're active in the NOG organisations there, so you are going to see a lot more local faces, and not just the Americans and the Germans, on stage there.
That's all I'm going to say about Akamai. And then I think my job here is to make you interested to come to Krakow. We want to all see you there.
So why Krakow and why did we think Krakow would be an amazing place to have a RIPE meeting?
It's the combination of a historical, traditional, proud heritage with a modern, thriving university city: a city that is highly invested in technology while ensuring work-life balance, being one of the greenest and most family-friendly cities we have ever seen, with an active day and night life where you can spend your whole evening moving through the city to whatever entertainment you like.
We thought this fits well with the values of the RIPE community. We wanted to give back, we wanted to see you all there, and we wanted to introduce you to Krakow. So, two or three recommendations, and then I'll leave you with that.
Recommendation 1: As someone who has been to Krakow on a very regular basis and who deeply missed it during Covid: arrive early, leave late. There is so much to see; you are going to enjoy it. Just as an example, you have two UNESCO World Heritage sites at your fingertips, and plenty to explore throughout the city.
How to prepare: It's well connected to most of Europe; you can fly, or take a bus or a train. English is well understood in most places. The currency is not yet the euro and, I am going to be controversial there, I fully acknowledge that; it is where we are. As a German, I want to point out that you can use a credit card in nearly every place.
This was important for me, because I was approached by a couple of people throughout the week, so I wanted to be very clear on it: Krakow, from everything that I have seen, from everything that the company has seen, and from everything that all the diverse groups we speak to on a regular basis have seen, is a very open, young, thriving, safe and diverse city, and it will welcome you with open arms and make your life and stay there very enjoyable.
So, with that, I hope to see you all in Krakow next year, we'll welcome you there. If I can do anything for you, please reach out, I am happy to see you. Thank you.
MIRJAM KUHNE: Thank you very much. I am really looking forward to this. And I promised that you wouldn't see me again, but I forgot one important thing: a big, big thank you to the local host. We have a token of appreciation for the local host that the events team organised, so whoever is here from Namex, please come up and receive some gifts. The whole team is here.
SPEAKER: Just to say a few words. I would like to thank the whole RIPE community, and especially the RIPE NCC, for the amazing job. Thank you so much; we were really happy to host you, and we tried to do our best. I hope it worked. Thank you so much.
A special thanks from me to Sandra on your side, because we worked with her and she did an amazing job. Let me thank my wonderful team, Alexandra, Luca and our newcomer, Marta, thank you so much.
Also, let me thank Marco, our technician, who was back at work, and Flavio, who got sick but says hello to you all.
See you in Krakow.
SPEAKER: I almost forgot that we have a present for the RIPE NCC staff. It could only be pasta. We also have a timer for the pasta, because, well, outside Rome people give the pasta too much cooking time, so you have to follow the rules. Please follow the rule; this is the tool, and then...
Thank you so much, Mirjam.
MIRJAM KUHNE: At the very end, I would like to also do a big thanks to the stenographers.
STENOGRAPHER: You're welcome. I am bloody wrecked!
MIRJAM KUHNE: I can imagine. There are many other communities who are very jealous of our stenographers that we have had here for the RIPE meeting for so many years and they are looking at us with envy.
STENOGRAPHER: Don't tell them about us.
MIRJAM KUHNE: I hope this has been useful for your participation. Thank you.
LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC