RIPE 87

Archives

PLENARY SESSION
28 NOVEMBER 2023
2PM


DMYTRO KOHMANYUK: Thanks, everybody. The front rows are quite empty, use them if you can. This is the next installment of the RIPE meeting, with an opening presentation by Jayasree Sengupta, web privacy by design: evaluating cross-layer interactions of QUIC, DNS and H3.

JAYASREE SENGUPTA: Thank you, and good afternoon, everyone. So, I will begin today's topic on web privacy by design.

So, as the figure shows, we all know that unencrypted DNS resolution using DNS over UDP is traditionally used and is still the most common form in today's Internet. Web communications are encrypted, but DNS still remains largely unencrypted, which can reveal a lot of information about users, so attackers might be able to gain a lot of insight into users and even build user profiles in the long run. Therefore, browsers have recently started to offer encrypted DNS options and allow us to use DoH. But as we know, both DoH and DoT are constrained by several factors, such as the head-of-line blocking problem and the multiple round trips. DNS over QUIC, DoQ, avoids these problems and gets over the disadvantages that DoH and DoT have by offering multiplexing support and, much like H3, by avoiding multiple handshakes, so it is potentially faster.

So, even when using DoQ and H3, we realised the improvements are pretty much uncoupled, so what we try to analyse in this talk is how to improve the web browsing experience with built-in web privacy by design.

So, as I already mentioned, with QUIC in place, what we can do is use the underlying QUIC connection both for DNS resolution over DoQ and for web content delivery using H3 over the 0-RTT that QUIC supports. As a result, every H3 request to the same web server can reuse that same connection, even a fresh H3 request. What we gain is optimisation potential: the web communication becomes both private and faster. This is the figure of what we propose: instead of using unencrypted DoUDP, the DNS resolution happens over encrypted DoQ, and we then reuse the same QUIC connection for communicating with the web server. In other words, we coalesce the connection on the server side so we can reuse the QUIC connection.

We will go to the methodology in the next set of slides to explain it better. What we tried to do here is we tried to again analyse what is the impact of reusing the same QUIC connection on the web performance, as well as how does it impact both the fixed and the mobile access network technologies?

So, this shows our methodology. What we do here is decouple the DNS resolution on the client side, and we have the DNS as well as the H3 server running in the same process on the server side. We use Linux network namespaces to create this set-up, and we then measure all the metrics that we are interested in.
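
As a rough illustration of that set-up, here is a minimal sketch (not the authors' actual testbed) of how one access-technology scenario could be emulated with Linux network namespaces and netem; the interface names, addresses and the delay and rate values are illustrative assumptions only.

```python
# A minimal sketch (not the authors' actual testbed) of emulating one access
# technology with network namespaces and netem. Interface names, addresses
# and the delay/rate values below are illustrative assumptions.
import subprocess

def sh(cmd: str) -> None:
    """Run a shell command and raise if it fails."""
    subprocess.run(cmd, shell=True, check=True)

def build_emulated_link(delay_ms: int = 14, rate_mbit: int = 100) -> None:
    # One namespace for the client (browser + stub resolver), one for the
    # combined DoQ/H3 server process, joined by a veth pair.
    sh("ip netns add client")
    sh("ip netns add server")
    sh("ip link add veth-c type veth peer name veth-s")
    sh("ip link set veth-c netns client")
    sh("ip link set veth-s netns server")
    sh("ip netns exec client ip addr add 10.0.0.1/24 dev veth-c")
    sh("ip netns exec server ip addr add 10.0.0.2/24 dev veth-s")
    sh("ip netns exec client ip link set veth-c up")
    sh("ip netns exec server ip link set veth-s up")
    # netem models the scenario's one-way delay and rate, e.g. values derived
    # from the FCC Measuring Broadband America data.
    sh(f"ip netns exec client tc qdisc add dev veth-c root "
       f"netem delay {delay_ms}ms rate {rate_mbit}mbit")

if __name__ == "__main__":
    build_emulated_link()  # needs root privileges
```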

So, for this set of experiments we look into three different web pages: a basic HTML page, an HTML page with JavaScript, and an HTML page with JavaScript and cookies embedded in it.

For all three web pages we analyse them under different network conditions, such as fibre, cable, DSL, 4G as well as 4G medium, and we use the FCC Measuring Broadband America dataset for the fixed broadband connections and the wireless access technologies. We made certain changes to CoreDNS and to Chromium as well in order to build our measurement set-up.

We also normalise the data points based on each scenario's delay, or round trip time.

When it comes to the measurements, we perform three different sets. First, we analyse how QUIC interacts with DoQ and H3 in general. Looking at the graphs, the major takeaway is that the performance of DoQ and H3 1-RTT is largely synchronised with the round trips, where we see distinct steps, each a result of the different access technologies that we measure.

Next, we analyse the same thing, the scaling capability of QUIC while interacting with DoQ and H3 under different network conditions. Here we show both of the extreme network conditions, that is the fibre scenario as well as the 4G scenario, and we realise the connect times have a long tail in the upper percentiles and a relatively large spread from the minimum to the 20th percentile. What we summarise from here is that the processing delay weighs in heavily for the lower-RTT access technologies, while in absolute terms the processing delay is the same for the high-RTT access technologies, so it weighs in much less relatively, resulting in the observed differences between the H3 0-RTT and 1-RTT set-ups being small.

For the next set of experiments, we analyse the overhead of DoQ and DoH in comparison to DoUDP, which is considered the baseline. Here we again see steps occurring as a result of the different access technologies. The major takeaway is that DoQ and DoH do not exhibit the expected number of round trips, only DoUDP does, and in general the DNS exchange of DoQ is one round trip faster than DoH, which is pretty much expected. Next, we again see how the DoH exchange varies or scales over different network conditions, and we again show it for the fibre scenario as well as the 4G scenario; the takeaway is that the lower-RTT access technologies see relatively larger increases in delay.

So, finally, we move to comparing our proposed set-up, which uses DoQ and H3 over 0-RTT, with different variations of the protocols: that is, we also compare how DoQ performs with the normal H3 1-RTT set-up and how DoH performs, and all of these are compared against the DoUDP baseline. What we see here is that DoH has the highest relative increase across all types of web pages. The web pages are ordered in terms of their complexity along the x-axis, whereas the access technologies are ordered in terms of their delay.

So, we see that the example page shows the highest relative increase amongst all of the web pages, and we also see that the relative increase for the Wikipedia page is generally greater than for the Instagram page, which is mainly due to its complexity. There is a slight exception to this: as we can see, the Wikipedia page actually shows results which are pretty much comparable to the baseline, that is DoUDP. We also see that the performance of the access technologies is in the order of their respective round trips, but this also has a few exceptions; for example, DSL actually performs worse for the example page in the case of DoQ with H3 1-RTT, and the other exception is that fibre actually performs worse compared to DSL for DoH. So what we learn from this slide is that the emulated DoQ with H3 0-RTT performs best across all web pages, and it gets pretty much similar performance to the baseline in the fibre scenario for the Wikipedia page.

That is with respect to the median increase, so in the next slide we dig deeper and look at the general trend of the relative increase of PLT. Here we see that for all access technologies the example page has a shorter left tail and the Wikipedia page has a comparatively longer left tail. When we compare the difference between the protocols, Wikipedia has the largest whereas Instagram has the smallest, and if we compare DoQ with DoH we see that they scale with the round trip except for a few situations, while the difference between H3 0-RTT and 1-RTT doesn't scale with the round trips. So what we basically learn from this set of slides is that both dimensions have an effect on the relative increase over the baseline: with increasing delay between the client and the server, the potential time savings of 0-RTT increase, while the savings from using DoQ instead of DoH also increase.

In the last set of measurements, we analyse the median relative increase across the protocol combinations with respect to the DoUDP baseline. Here, for each protocol combination we have 15 data points, one for each web page and access technology, and we see that DoQ with 0-RTT, which is what we emulate here, actually matches the baseline for one of the combinations, as we had shown earlier: the fibre scenario for the Wikipedia page; the median of this set-up is only slightly higher, whereas DoH has a relative increase of approximately 15%. It's interesting to see that even in the worst-case scenario, DoQ plus 0-RTT still performs better than the other two competitors. So what we learn from here is that when using H3 1-RTT, the PLT for DoH gets inflated by more than 30% over fixed line and more than 50% over mobile, while our emulated set-up, DoQ with 0-RTT, improves the PLT by one-third over fixed line and by half over mobile when compared to the existing normal DoQ with 1-RTT set-up.

So, that pretty much brings us to the end of this discussion, and here we point out that there is scope for improvement because there are certain limitations in our study. For example, the presented findings represent an emulated set-up, where we decouple the DNS resolution from the web page browsing on the client side, and we would have to refine it to make it work better. The measurement set-up is also currently limited to web pages having a single DNS resolution and is implemented with a single H3 web server, so it can be extended to web pages with more than one DNS resolution, where we can also emulate cross-traffic network conditions, and finally, websites with several DNS resolutions should have a scaling factor which also needs to be analysed.

So, this brings us pretty much to the final overall takeaway: as we have seen, DoQ with 0-RTT, which we propose, is actually the best option for encrypted web communication on the Internet, because it shows better performance by reducing the page load time over both fixed line and mobile.

So, yeah, that is all and I am open for questions.

(Applause)

DMYTRO KOHMANYUK: Thank you. Well, thank you, that's insightful. We have two mics here that both work, please feel free to queue and state your affiliation. You can rate any talk, that one, the future ones, the past ones, and that would help both the PC and the presenters to know how well they were received. Or maybe I will check with Jan if he sees any remote questions. Thanks, sorry for that. Thank you. It's very insightful, I am happy that DoQ is taking over, thank you.

Now for the usually present but unfortunately not present in person today, Geoff Huston, with the exact title "Starlink and LEOs", law enforcement officers, I assume. Welcome.

GEOFF HUSTON: Hi, everyone. Good afternoon, I am sorry I can't be with you in Rome, my poor body is not doing another long haul, so I find myself back here in Australia recording this. Why am I recording this? Because, quite frankly, I don't want to see myself doing a talk at 2:00 in the morning, and neither do you. I figured it was better if I recorded this during daylight hours my time so at least I will sound slightly coherent.

What are we talking about today? We are talking about Starlink.

There's been a revolution, I think, in the connectivity area over the last few years; it's been a sort of revitalising of that old satellite system, and Starlink has been the tool to do that. So, what I want to talk about is what's so special about Starlink, some of the physics of it, and I am going to look at how today's Internet protocols perform across this medium. There's no doubt that Starlink is generating a lot of interest, and part of it is that it's actually possible to use hand-helds and connect in to orbiting spacecraft, as long as the spacecraft is big enough, there's enough power in the link budget and you are not after hundreds of megabits per second, because handhelds can't do that. In Australia Optus has signed up. In America it's been T-Mobile. There are deals going on in Japan, all kinds of places.

So it's kind of big news. Now, why Starlink? Why LEOs? Well, the physics of LEOs predates Arthur C Clarke by about 300 odd years. Newton worked out that if you get slightly higher than the earth's surface and fire a rock sideways at a magic speed, and I am not sure how he measured it, furlongs per fortnight or something, but today in metric terms the magic speed is 11.2 kilometres a second, and if you neglect the effects of friction in the earth's atmosphere, if you fire something at that speed sideways, just high enough to get above all the earth's mountains, it will go around and around forever.
Slightly faster, off into space. Slightly slower, it's going to hit the ground.

So Newtonian physics suggests that if you fire it fast enough it will just stay there.

Of course the real world is somewhat different. There is an atmosphere, but there's something else and it's called solar radiation. The good news is, for you and me and every other piece of biology, we have a molten iron core which is rotating, unlike Mars, which doesn't. The resulting strong magnetic field deflects solar radiation, so all of a sudden we are sheltering under the Van Allen Belt, which means the level of radiation which hits you and me is much, much lower, and it allows us biological life forms to be sheltered from its effects as well.

The Van Allen Belt actually has two belts; there's the outer belt, and a smaller inner belt. And what you are trying to do with orbiting satellites is sneak them in under this inner belt. There's this area just above the top of the stratosphere, 160 odd kilometres up, up to around 2,000 kilometres, which is this magic zone sheltered by the Van Allen Belt but also, of course, above the earth's atmosphere, so it's not generating heat and friction as it bounces off the atmosphere. Starlink is running at 550 kilometres, Hubble was at 595. It's kind of a well populated spot in so many ways.

The LEO belt is 160 to 2,000 kilometres: that's high enough that grazing the atmosphere won't slow it down, but not so high that you are outside the shelter of the Van Allen Belt. It's also higher than the legal definition of outer space in almost all the world's countries; countries can't agree on where outer space starts, but variously it's between 100 and 160 kilometres, depending on the country. Above that altitude you really are in outer space, you are not operating within any national jurisdiction, which is important, I think, in this satellite land grab that's going on in the LEO space.

You are also in what is ostensibly a vacuum, which means electromagnetic radiation travels 50% faster than it does in fibre. At 550 kilometres, to get there and back takes a mere 3.7 milliseconds. Even when they are relatively low in the sky, at 25 degrees elevation, it's still only 7.5 milliseconds to bounce a signal off that satellite and back again. So they are close, they are fast.
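
Those two figures follow directly from the geometry; as a quick back-of-the-envelope check, with the speed of light $c \approx 3 \times 10^5$ km/s, Earth radius $R \approx 6371$ km and orbital height $h = 550$ km:

$$\mathrm{RTT}_{\min} \approx \frac{2h}{c} = \frac{2 \times 550\ \mathrm{km}}{3\times10^{5}\ \mathrm{km/s}} \approx 3.7\ \mathrm{ms},$$

and at an elevation angle $\varepsilon = 25^{\circ}$ the slant range is

$$d = \sqrt{(R+h)^2 - R^2\cos^2\varepsilon} - R\sin\varepsilon \approx 1120\ \mathrm{km} \quad\Rightarrow\quad \mathrm{RTT} \approx \frac{2d}{c} \approx 7.5\ \mathrm{ms}.$$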

How much does the satellite itself see? Below a usable elevation of around 25 degrees it's not worth doing. But what a satellite will see is a circle on the earth with a diameter of around 1,800 kilometres, or a radius of 900, which, if you remember your basic geometry, means around 2 million square kilometres is visible to that satellite at any point in time. If you look at the surface of the earth and do some basic maths, you will figure out that if you could evenly space your satellites you would only need about 500 to cover the entire earth's surface, and that's wishful thinking of course, you want better than that; for high quality coverage you will need between 6 and 20 times that number. But in essence, what's going on now is that Starlink had launched, up until a few weeks ago, 4,276 of their spacecraft into orbit. And if you go to this rather natty little website you can see a simulation of the entire Starlink constellation. They are in a belt, they are not in a polar orbit, they are in an inclined orbit of around 56 degrees, and a small number of them are in polar orbits to service some of the remote areas. Those trains on this simulation are recent launches: they pop out the spacecraft in a densely packed train, and over the ensuing weeks they space out and become an evenly spaced sequence, so that website has a beautiful simulation of what's going on.

They are interesting. Why are they interesting? Well, they are really close to the earth, which means even mobiles can raise a signal and send them a signal, because they are not far away; they are not like a geostationary at 38,000 kilometres away. If you have a big enough antenna, and this handheld is too small, you need something biggish, not that big, luggable I think is the appropriate term, you can get some signal speed and you can get very, very good performance. The other thing is they are quite low in the sky, so it's very hard for a third party to try and jam them, and so in areas which are hostile, and Ukraine was a good example, Starlink services were ideal for communication services which were actually relatively challenging to jam with conventional jamming techniques. But on the other side, if you need decent service across the entire earth, you are going to need a lot of them.

That's always been a major barrier. That was the reason why Motorola's Iridium went bankrupt almost a day after it finished its final launch, it cost them too much to launch all this stuff and they were never going to make their money back so off they go into bankruptcy.

But SpaceX changed all that. The rocket comes back down to earth, it has no humans in it, so you refuel it and send it off again, and all of a sudden the launch cost to get things up, which conventionally is around 10,000 to 20,000 dollars a kilogram into orbit, changes: SpaceX's Falcon 9 was doing it at 2,200 dollars a kilo, and this is even lower at around 1,700 dollars a kilo. So all of a sudden there's a new economics of pushing things out into orbit and it's dramatically cheaper, and that makes a huge difference to what you can do and how many you can push up.

Now, these are filings that have been lodged with the ITU; some of these are fantasy, some of them are real, but the dreams are pretty amazing. You know, there's SpaceX, which is doing direct retail; OneWeb, which is trying to be a wholesaler; Amazon, which is getting close to launching Project Kuiper; there are the Chinese projects and a bunch of others. The combined total of all of these is around 90-odd thousand in the current filings, so there's almost a bit of a satellite grab going on to get orbital slots, or altitude slots, in the LEO space over the coming years, and there will be a lot of them up there.

They are also getting cleverer. The original satellites, even the geostationary ones, were mirrors in the sky: you send a signal up and it gets bounced down again, and what that means is that every user has to have an earth station somewhere in that same sort of radius of reachability. They have to be no more than about 1,000 kilometres apart, preferably closer, so you have got to dot the place with earth stations. With the current generation of Starlink, some of them are equipped with inter-satellite laser links, and the Gen 2 ones have inter-satellite laser links, so you can send the signal up and then across and down to where there is an earth station. You don't have to have a mirror; you can actually do routing between spacecraft. And what that has allowed, if you look at some of Starlink's coverage maps, is that they are claiming they can cover all of Australia; even halfway between Sydney and Broome you can get a decent signal and it will work. They have done this in the western part of Mongolia, and I found a trace route from one of them that says to get from western Mongolia, where there is very little there, not even a mobile tower, but there is power, across to Japan and back again using inter-satellite links takes around 90 milliseconds, which is probably faster than doing it with fibre. And it's not a bad service. So, you know, it is appearing now and it is a thing.

So, let's move from that, very quickly, to the issues of satellites and signals. They are low, they are moving very fast, 27,000 kilometres an hour, horizon to horizon in five minutes flat. So, they are quick. And here is a simulation: if you looked up at the hemisphere above your head, at almost any point on the earth, as long as you are not at the pole, you will see a picture a bit like this, where the spacecraft are tracking across; the inner circle is that 25 degree elevation angle, and what you are seeing, as the earth rotates and satellites move past, is at any point in time somewhere between 30 and 50 satellites in view. So there are rich pickings for an earth station and a user to pick from to use as a relay.

And they will traverse across that inner one every three minutes.

But they don't use a three-minute switch, they use a faster switch. Starlink schedules a receiving station on earth to a satellite in 15-second increments, and you can see that if you look at the latency. This is the latency across 20 of these 15-second intervals, and if you look at the minimum latency and get your bracketing right you can actually see the switch from satellite to satellite to satellite, because they will have subtly different latencies, and on top of that is the standard latency variation you get to see with Starlink. So the switching means you will get a semi-stable service for 15 seconds, then it will quickly switch to an entirely different satellite with an entirely different RTT as a result, and again in 15 seconds you are going to get another switch. So the latency characteristic is a lot of low magnitude jitter, and every 15 seconds a larger jitter component, and that's what protocols have to deal with.
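
To make the idea of "bracketing" concrete, here is a small sketch (an assumption about the analysis, not the speaker's actual tooling) that groups RTT samples into 15-second scheduling intervals and takes the minimum per interval, so the per-satellite latency floor and the jump at each switch become visible. The input format, one "timestamp rtt_ms" pair per line on stdin, is an assumption.

```python
# A sketch of the "bracketing" idea: bucket RTT samples into 15-second
# scheduling intervals and keep the minimum per interval, exposing the
# per-satellite RTT floor and the jump at each hand-over.
# Input format (assumed): one "timestamp_seconds rtt_ms" pair per line on stdin.
import sys
from collections import defaultdict

def per_interval_floor(samples, interval=15.0):
    """samples: iterable of (t_seconds, rtt_ms); return {interval_index: min_rtt}."""
    floors = defaultdict(lambda: float("inf"))
    for t, rtt in samples:
        idx = int(t // interval)
        floors[idx] = min(floors[idx], rtt)
    return dict(floors)

if __name__ == "__main__":
    data = []
    for line in sys.stdin:
        parts = line.split()
        if len(parts) == 2:
            data.append((float(parts[0]), float(parts[1])))
    for idx, floor in sorted(per_interval_floor(data).items()):
        print(f"interval {idx:4d}: min RTT {floor:6.1f} ms")
```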

Now, before we move away from this, let's also look at the actual satellite itself. They run an array of transponders: they have 2,000 megahertz of channel, each at 250 meg, three down, one up, that's 8 beams and two polarisations, so they have got 48 down and 16 up. What you are seeing is a set of these beams focused as they move across the earth's surface. The implication is interesting, because a fixed point will see the first of these beams, then transition to the second of these beams, and then to the third. So, as the satellite moves across the sky, the signal to noise ratio will get better and worse, better and worse, and better and worse. And as you probably remember from Shannon's law, the signal to noise ratio determines the available capacity of that system, so the carrying capacity of these circuits will vary continuously.
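
The Shannon's law being referred to here is the Shannon-Hartley capacity formula,

$$C = B \log_2\!\left(1 + \frac{S}{N}\right),$$

so for a fixed beam bandwidth $B$, the deliverable capacity $C$ rises and falls with the signal-to-noise ratio $S/N$ as the beams sweep past, which is exactly the continuous capacity variation described here.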

Now, you can see this if you connect up to a Starlink modem, and I have done so and brought up the gRPC tools, and it reports, amongst other things, the instantaneous capacity, the uplink throughput and the ping latency to their POP in milliseconds. So if you do this often enough and report the results, you get graphs that look a lot like this. On the right-hand side is latency across a one-hour period, and you will see that Starlink varies between around 30 milliseconds and a little over 60 most of the time, but it's never stable; it's constantly changing, as we said, both in the small and, every 15 seconds, in a large leap.

Because of the constantly varying signal to noise, the reported capacity of the service will also change continuously, and here is a plot of the reported capacity over a one-hour period. You notice that most of the time it's between around 130 and 180 megabits per second for downlink, but occasionally, if you get multiple beams helping you out, it jumps, and in this particular case there was just one interval where it said hey, I can do 900 megs, that's my capacity. So on the whole, no, don't get your hopes up; somewhere between 80 and 200 is much more likely.

If I blow it up a little bit and look at just 300 seconds, or five minutes, you start to see that even at a very fine grain you get this extraordinary amount of variation in both the available link capacity and the reported circuit latency. This is a highly unstable system. This is not fibre and it's not even mobile telephony, it's not even 5G; it's much less stable than that. And what that means is that TCP, which assumes wires, is going to struggle, it is really going to struggle to get performance out of this.

Not only is the latency and capacity varying, but the loss is also interesting. This is a ping every second for one hour, and you notice across that one‑hour period, around about 4,000 times I lose one packet, just lost it. It's not a burst of loss; it's just micro drops ‑‑ one loss. So you are getting both micro drops and a very unstable jitter. And that's kind of what the TCP protocols have to deal with.

Now, how does it work? Well, we commonly use, or probably you do, speed test, and here is speed test every three seconds for a couple of months on a service in California. And what it says is, most of the time Starlink reported a down speed of above 50, closer to 100 megabits per second. Occasionally if you are lucky, it will get over 200, and some of the outliers are up around 300. Which is pretty impressive. But is it true?

Well, you have got to match what TCP does against what the speed test reports, because a speed test does not measure protocol throughput; it is doing a circuit measurement based on packet pairing, looking at the amount of dilation, and the pairs get dilated by the bandwidth variations. TCP reacts differently; it is searching for long term stability. So the blue line probes upwards to a point where it gets packet loss, takes the pressure off the network and starts doing it again. Reno, exactly the same behaviour: gently probing forward into loss, finding loss, backing off and starting again.

The other one, a protocol that is emerging, is BBR, the bottleneck bandwidth estimation approach from Google, which uses a completely different algorithm. It tries to estimate the link capacity, the path capacity end-to-end, and then sits with it, except that for one round trip time out of every eight it probes upwards by 25%, putting stress on the buffers in the network. If there is no latency change whatsoever in that round trip time, it goes: well, I didn't even touch the buffers, I can go faster.

So it will then lift up its capacity estimate and go faster. If, however, you are working at capacity, any extra traffic you push into the network is going to start filling queues and the latency will increase; and if, in that one round trip in every eight where you are looking to see whether increasing the sending rate increases your latency, it does increase, you don't do anything; you stay at the rate you are on. And in this simulation what you see is just gentle probing into the network, but basically a much more stable behaviour.
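
As a toy model of that probing rule (a sketch of the behaviour described above, not the real BBR implementation), the following captures the idea of sitting on a bottleneck-bandwidth estimate, probing 25% faster for one round trip in every eight, and only keeping the higher estimate if the measured RTT did not inflate; the parameter names and the crude path model are assumptions.

```python
# A toy model of the probing rule only, not the real BBR implementation:
# sit on the bottleneck-bandwidth estimate; for one round trip in every
# eight, pace 25% faster; keep the higher estimate only if the measured RTT
# did not inflate (i.e. the probe did not build a queue).
def bbr_like_step(rtt_count, bw_estimate_mbps, base_rtt_ms, measured_rtt_ms,
                  probe_gain=1.25, rtt_tolerance=1.05):
    """Return (new_bw_estimate_mbps, pacing_rate_mbps) for this round trip."""
    probing = (rtt_count % 8 == 0)          # one round trip out of every eight
    pacing_rate = bw_estimate_mbps * (probe_gain if probing else 1.0)
    if probing and measured_rtt_ms <= base_rtt_ms * rtt_tolerance:
        # Latency barely moved: the probe did not touch the buffers, go faster.
        bw_estimate_mbps *= probe_gain
    # Otherwise the queue grew, so stay at the rate we are already on.
    return bw_estimate_mbps, pacing_rate

if __name__ == "__main__":
    # Crude demo path: a 150 Mbps bottleneck with a 40 ms base RTT. The RTT
    # inflates only when the probe rate would exceed the bottleneck.
    estimate, base_rtt, bottleneck = 100.0, 40.0, 150.0
    for rtt_count in range(24):
        measured = base_rtt if estimate * 1.25 <= bottleneck else base_rtt * 1.5
        estimate, rate = bbr_like_step(rtt_count, estimate, base_rtt, measured)
        print(f"rtt {rtt_count:2d}: pacing {rate:6.1f} Mbps, estimate {estimate:6.1f} Mbps")
```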

That's theory. Let's look at practice.

Cubic, and again don't forget Cubic is not given a chance to do what it normally does. Nothing is stable, there's lots of loss, almost at random, the available link capacity is kind of random, and the buffers are big. So it's a sloppy system with a lot of noise. There's just one point where Cubic gives an almost classic signature, from around second 17 to second 30, where it ramps up, finds what it thinks is a stable speed, which is around 45 megabits per second, not a lot by the way, and then starts to go up again and encounters loss, and loss, and continued loss, which brings it down and down and down, till eventually, near the end, Cubic is around 10 megabits per second and literally struggling, because those continual loss characteristics cause TCP to downgrade its estimates of what it can achieve stably, because Cubic is searching for stability.

Or what about QUIC? Well, QUIC here is just Cubic; Cubic is sitting behind that encrypted UDP. Now, there are also BBR versions of QUIC, but I will talk about BBR in a second. As you see, QUIC behaves much the same way as Cubic. The only difference in this implementation is that it started with a high initial window size of ten packets, and because the buffer size in these modems is huge, that initial window will look like it's viable and it will maintain a pretty high sending rate, filling up the buffers, but the end result is catastrophic loss, and that catastrophic loss will cause Cubic to rapidly drop down its congestion window and put the brakes on very, very hard. So by about second 35 to 36 I am down around 10, maybe 5, megabits per second, and even then I am getting loss and the acceleration back up is much, much slower, so it's still not a very stellar performance.

BBR, on the other hand, is vastly different, and the reason why is, of course, that loss doesn't matter for BBR; it's trying to estimate a sort of stable-ish bottleneck bandwidth, and even if it varies a bit, it will keep on sending at that rate as much as it possibly can. So what you find is that BBR is actually able to sustain around 130 to 200 megabits per second, except around second 30, and does so even in the face of relatively high packet loss. BBR certainly works better in these unstable, highly noisy environments. Here are all three: the red is BBR, and over the 60 seconds BBR was able to push a lot more traffic through that link, whereas both Cubic and QUIC, which is just Cubic, collapse pretty quickly, and in searching for a stable point found that the continual issues of loss and jitter just defeated them, and 10 to 30 megabits per second is about as much as you can get out of these systems using that protocol. So this is very much protocol-related.

So, Starlink has very high jitter rates and a high level of loss, because these are fast moving systems, with satellites going over very quickly, which means normal TCP simply can't stabilise at the speed at which you want it to stabilise to make use of this, so the algorithms overreact, pull everything back and make everything collapse into a small capacity.

Now, if you are doing voice, which is not very high capacity, or Zoom, or even video streaming, which is just buffer refresh, you actually might not notice this. It will work just fine and it will look like, say, a DSL connection of somewhere between 20 and 50 megabits per second, and it will work well. If you really wanted to use this to move bulk data at very high speed, no, not so much. Bulk data transfer will not work with conventional, loss-based flow control algorithms. If you really want performance out of Starlink, you need to move to something that isn't loss sensitive, and BBR is a good example of that; it gets performance out of the system, it pushes it well up.

Now, I don't want to leave you with the wrong impression of Starlink. It's good, and if it's the only thing around for literally thousands of kilometres, when you are on the western plains of Mongolia or somewhere in the middle of an Australian desert, good luck to you, it will work, and it's the only thing that will work, exceptionally well. It'll do video conferencing and short transactions at 10 to 20 megabits per second, and it will do it much, much better than an equivalent geostationary service or HF radio. It may not do it as well as 5G mobile, but when you are in the very remote areas there is no such thing as a 5G mobile service, so it's the only thing around, and it's pretty damn good. If you are somewhere remote or in the middle of the ocean, this will work well and it's extremely good. So, for retail-based access services, it's very hard to beat in the remote rural application.

If you want trunk infrastructure to connect up isolated islands in the middle of the Pacific, maybe not so much. It's much harder to get a stable service out of it, and if you put a whole lot of multiplexed sessions on top of it, things are probably going to collapse, so it's not a good trunk infrastructure, but as high speed last mile, brilliant. And of course, in settings that are hostile it's very, very challenging to jam. Starlink has some incredible strengths. And the other part of it is, it is not expensive; for a satellite service these days it's certainly on a par with many terrestrial services, cheaper than most Australian mobile services, so it's cost-effective.

With that, hopefully I have come to the end of the time. I will certainly clamber up and be ready and willing to answer any questions you may have at the end of this, thank you very much.

(Applause)

DMYTRO KOHMANYUK: Exciting, and not much about DNS; maybe the previous talk and this talk may lead to collaboration.

SPEAKER: I have a question regarding the mobile phones connecting to satellites via 5G. How do you make sure that your satellite service complies with all the frequency regulations of all countries across the world, so that phones in all the countries across the world can connect to your constellation? Thank you.

GEOFF HUSTON: Can you hear me, firstly?

DMYTRO KOHMANYUK: Perfectly.

GEOFF HUSTON: The issue is, no, it does not comply with all the countries all of the time. While it's outer space, and there are no laws above 160 kilometres, down here, to send and receive, there are laws, and Starlink has to negotiate individually with each national jurisdiction to get a licence to operate. So it has licences in the United States, it has licences to operate in Australia, I believe in New Zealand, but, you know, in other countries it's a country by country thing. It does use the standard frequency allocations for both 5G and for satellite, the Ka and Ku bands. But for individual countries, realistically, it's a one-on-one negotiation as to when they can operate. So Australia has signed up with Optus on 5G, the US has signed up with T-Mobile, but, quite frankly, other countries, it's up to them. Don't forget that for a hand-held to work you need an enormous transponder; there's one up there which is the size of a tennis court, it's not Starlink, and astronomers are objecting to this floating tennis court in the sky, so I wouldn't get your hopes up. I think short SMS services are about the only thing that's viable from that altitude. Thank you.

DMYTRO KOHMANYUK: I think they had the licence for the stationary Starlink terminals, and those are different, if I remember correctly.

JIM REID: Freelance consultant. Great talk as always. You might be aware there's been discussion about this kind of stuff in the IETF setting, and the argument has been made that the use of these satellites changes the paradigm for communication, because you have got a fast moving base station relative to a static observer on the ground, and that transforms the way in which normal mobile communication works. The argument being made by some people in the IETF setting, not necessarily in a Working Group though, was that we needed new protocols for that, and that TCP and IP and QUIC were no longer good enough to satisfy those kinds of concerns because of the jitter and latency considerations that you mentioned earlier in your talk. So my question to you is, what is your perspective on that argument, and do you think the solution will be something like we did with TCP back in the 1990s when we first had to deal with the use of long fat pipes?

GEOFF HUSTON: Yes, indeed, Jim, and I think you are absolutely right. We actually didn't alter TCP very much with the long fat pipes. To be perfectly frank, TCP simply had so much inertia that trying to make it work well over hundreds of gigabits of pipe capacity didn't really succeed. And indeed, when the mobile revolution came along, TCP hardly adapted at all; it was actually left to the mobile operators to make their circuits look like wires, because the TCP we use today, and in particular Cubic, which is almost a monoculture, works over stable circuits, and the one thing you find with spacecraft zooming over you is that you are getting massive variations in signal to noise and massive variations in latency, almost in fractions of a second. TCP collapses. So while I am given a 200 megabits per second bearer, I am getting 10, 20 megabits per second of sustained throughput, because TCP doesn't cut it. And it's not TCP as such, it's loss-based congestion control, and that model of loss-based congestion works well if you have got a carrier, a wire, a bearer. If you haven't, and you are effectively in free space, you are on the wrong protocol. If you want a protocol to work you have got to move to a different paradigm. I'm an ardent fan of BBR; this stuff scavenges bandwidth like nothing I have seen before, I can get from one side of the earth to another without any kind of bandwidth reservation. It just literally elbows everybody else out of the way. It's a fantastic protocol. So my view: I don't know why we are not all running BBR. It really is a totally different way of thinking about congestion and protocol behaviour, and if you are not running it and you are wondering why your service is rubbish over long distances, well, that's the answer. So, I think BBR is definitely the coming protocol right now for this kind of multifaceted world.

DMYTRO KOHMANYUK: I wonder about routing between the satellites in the sky, that would be another issue.

SPEAKER: Harry Cross, speaking on behalf of myself. Just an interesting question: have you done much around what happens when a device moves between satellites while it's transmitting, or mid-flow? Because I can imagine that's going to be quite complicated.

GEOFF HUSTON: Well, there's a switch every 15 seconds, so your phased array antenna, which doesn't move very much but manages to refocus across angular distance, does that switch every 15 seconds. The reason why there is a 40 to 60 millisecond latency on Starlink when the actual spacecraft is only 3.7 milliseconds away from you is that there is an enormous amount of buffering going on. And so when it does that switch, it has some milliseconds of a sort of hiatus in connectivity as one half of the array locks into the new satellite and the other half gets prepared for the next switch in another 15 seconds' time. So Starlink is continuously interrupting your circuit and flicking to another one. And it's not just you, the base station is doing the same trick, because to keep your signal going it's flicking across from satellite to satellite to pick up your signal, and the protocol between satellites, it's just IP, it's your protocol. Whatever you use for flow control is whatever you are using, which is why Cubic ‑‑

SPEAKER: BBR in space is a thing?

GEOFF HUSTON: Oh, BBR in space is a thing and bloody hell, it works brilliantly. Switch to BBR!

DMYTRO KOHMANYUK: That's a good discussion now.

SPEAKER: Blake Willis, Zayo. Thanks for this talk, it's really good, along with a plug for Mark Handley's YouTube channel, where he has various simulations of the satellites. Do you feel like there's still any room in the connectivity space for high altitude pseudo-satellites, like the weather balloons or autonomous aircraft, or does this just blow it all out of the water?

GEOFF HUSTON: The stuff that's around the Loon area is much more stable, has much better signal to noise, and using some very clever steering they are able to offer a service. The problem is twofold. You are not in outer space any more, you need national licensing, so you are head butting against all the mobile network carriers; they have an entrenched business model and you are the enemy, so you are going to get shot down one way or another. The other part of this is, quite frankly, you have still got a problem with earth stations, because the lower you get ‑‑ with Starlink, even before the laser links, you needed an earth station in a 1,000 kilometre grid, which quite frankly was never going to work; that's why they had to move to inter-satellite links to try and get this thing running across space.

For the high altitude folk, that's the same problem squared, it's just a nightmare. So yes, you can go up across Puerto Rico after a massive hurricane and provide emergency service. Great, but that's a huge investment for a small window of service, and once everything is back to normal I am sure the Puerto Rican mobile companies are going to say get out of my business, that's my business. So, I am just not sure. Google sold off Loon, as you are probably well aware, and I think there was a Facebook one and a few other experiments. They were interesting experiments, but I am not sure they have a commercial future. Starlink is kind of the first, because of its low launch costs, to take this on in spades and do it in the middle of the ocean, anywhere, and that's kind of unique.

In reference to Mark's work, I am not sure how many satellites you can relay through to get down again. I know they are working from Mongolia to Japan; I don't know how many spacecraft are in the relay, you can't tell because it's all encrypted, so yeah.

DMYTRO KOHMANYUK: Thanks. Any remote questions? None. Please be quick and we will close it and go on to our next presentation, thank you, Geoff.

SPEAKER: So the question I am wondering about is whether perhaps it's too early to say that BBR is the best choice for space networking, perhaps also because the goal of congestion control is not just to maximise throughput, it's also about fairness, and currently, if we do the measurements where there is no competing traffic, of course BBR will look the best option, but when there's competing traffic it is taking bandwidth from every other Starlink user, so maybe we have to be a bit careful here?

GEOFF HUSTON: Spoken like a true network operator. As a user I don't care about anyone else, I am there with elbows out, get out of the way, and BBR suits me just fine. Network operators: how dare you be so greedy. It's the same old argument, generation after generation: this protocol is too aggressive, and folk like me going, you had a network and I am using it, what's your problem? I think that tension isn't going to go away any time soon. Thank you very much; although I can't be there in person, it has been a pleasure to at least talk in this virtual way, thank you.

DMYTRO KOHMANYUK: Thank you, Geoff.

(Applause)

It is my pleasure to announce our next speaker, Christer Weinigel.

CHRISTER WEINIGEL: I work for a company called Netnod in Sweden so I am going to talk a bit about time, securing time for IoT devices.

So, I probably do not have to tell you why accurate time is important but basically, many security critical protocols today they need accurate time, you have DNSSEC, TLS, both of them have sort of a validity, this certificate is not valid before a time or after a time. So to use TLS you really need time.

You also have applications that need time. For example, if we are in a hotel and they lock the doors at 9:00 in the evening and open the doors again at 9:00 in the morning, if you can trick them into opening the doors at 6:00 in the morning, that might be a bit of a problem.

How do you keep time then? Most ‑‑ or rather, any device can keep time while it is powered on; you have an interrupt you can use to keep time. Devices, when they are turned off, might not do that as well. They might not even have realtime clock hardware. Even if they have a realtime clock, such as the Raspberry Pi, it doesn't come with a battery, so to keep the realtime clock powered while main power is off, well, you need to buy a battery cable. So it's quite common that you don't have a working realtime clock when the power is off. There's also something called shipping mode, where you have a device sitting on a shelf and the first time you start using it you pull a small tab to connect the battery to the device. So the first time you boot: no power and no time.

Since we are talking to network people, basically we are talking about network connected devices, so we have a pretty old protocol from 1984, I believe: NTP, the Network Time Protocol, and everybody uses NTP today. It's old and unencrypted and doesn't really have any security. There are variants, but they don't scale to Internet scale.

There is another protocol called NTS that I was involved with. It adds security to NTP and I would say it's a pretty good solution. It does have a bootstrapping problem though: it depends on TLS, and TLS requires time, so where do you start? Do you try to get time first and then do TLS, or do you do TLS first, which needs time? Another problem with NTS is that it requires TLS, which is heavyweight, so for a small device it might not work that well, or it might be a bit too heavy, and if you have a fight between features and security, well, most people will choose features.
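
The bootstrapping problem can be seen in one line of certificate validation: checking a TLS certificate's validity window needs a clock that is at least roughly right. A minimal sketch, assuming a locally stored PEM certificate (the file name is illustrative) and the Python "cryptography" package:

```python
# A minimal sketch of why TLS needs at least a rough clock: a certificate's
# validity window cannot be checked without one. The certificate path is
# illustrative; requires the "cryptography" package.
from datetime import datetime
from cryptography import x509

def cert_time_ok(pem_path: str, now: datetime) -> bool:
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    # A device that boots with its clock at the epoch (no battery-backed RTC)
    # sees 'now' before not_valid_before, so every certificate looks invalid.
    return cert.not_valid_before <= now <= cert.not_valid_after

if __name__ == "__main__":
    print(cert_time_ok("server.pem", datetime(1970, 1, 1)))  # clock unset: False
    print(cert_time_ok("server.pem", datetime.utcnow()))     # current clock
```

A rough time within ten seconds or so, which is what Roughtime was originally meant to provide, is already enough to make this check meaningful.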

So, I want to talk about a possible solution called Roughtime. It's an IETF draft. It started out as a way to solve the bootstrapping problem; it wasn't really intended to replace NTP, because it was only supposed to have a resolution of about 10 seconds, so you can validate a certificate but you don't really want to use it as your main clock. It has fairly low CPU usage and memory footprint, so it's something you can run even on very small devices.

So Roughtime has evolved a bit since its beginnings. Now I would say it's a pretty decent generic time protocol, with a lot better accuracy than 10 seconds; the time stamps are in microseconds, so you can get as good a time as your network will allow. It is secure, it can run on a fairly resource-constrained device, and it still solves the initial bootstrapping problem.

So, one problem with Roughtime right now is that it's kind of stalled. There is a bit of fragmentation, there are four different incompatible versions of the draft and they don't speak to each other, and a lot of the people who have been involved with Roughtime haven't had time to work on it, so the RIPE NCC has funded us to kick-start Roughtime, to try to do something with it and get it working. So this is sort of a call to arms: please talk to me, because we want to get more people involved. We'd like to ask everybody: what do you need in a time protocol? Can you tell me your requirements so we can actually figure out whether Roughtime is a good fit? Do we need to add something to Roughtime or subtract anything from it? So basically, build a requirements document. After that, update the draft to follow the requirements. When we have done that, we'll get the implementations into shape and try to make it into an RFC at the IETF so it becomes a standard.

So, here are some resources if you want to take a look at this. I am going to talk a bit more about this at the IoT Working Group session in an hour or so and go into a bit more detail. My mail address is right there; please contact me if you are interested in this, because I really want your help. Thank you. Any questions?

(Applause)

DMYTRO KOHMANYUK: That's amazing. Thank you.

ROBERT KISTELEKI: We actually hit this problem, not knowing the time on the Atlas hardware devices, they don't have batteries or anything like that. We had our own solution which was good enough for up to a second accuracy or so, which is good enough for the purposes of Atlas. But I would love to talk to you because this is interesting and we may just get on board.

CHRISTER WEINIGEL: I am going to be here all evening today, and at least Thursday and Friday.

DMYTRO KOHMANYUK: Thank you. Any remote questions? Okay. Great. So now we have a remote speaker, Taras Heichenko from Ukraine, and he will talk about the "citizenship" of resources.

TARAS HEICHENKO: Today I present a lightning talk, the citizenship of resources.

This research was done when I participated in the execution of the work granted by the RIPE NCC Community Projects Fund. My part consisted of placing on a map the allocations of resources that changed their country attribute. To complete the task I had to find appropriate sources and write a set of scripts to get the result.

Let's start with sources.

Here is the list, but the main source for the research was the extended delegated statistics. The main subject of the research was AS numbers. I also looked at IP addresses: IPv4, I am sorry to say, demands deeper research that takes more time, and IPv6 was a very limited resource and almost didn't change its citizenship.

So what has been done? I took the files from the extended delegated statistics dated 23rd February 2022, the last day in Ukraine without the war, and 1st November 2023; let's call that "today". I selected from these files those AS numbers that had one country attribute in the first file and a different country attribute in the second file. I then had to exclude those numbers that changed their citizenship due to the deployment of NWI-10; unfortunately, the RIPE NCC doesn't provide a list of such resources. Do they have it at all, I wonder?
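
As an illustration of that comparison (a minimal sketch, not the actual scripts used), the following diffs the country attribute of AS numbers between two delegated-extended files, assuming the standard pipe-separated format registry|cc|type|start|value|date|status|opaque-id; the file names are illustrative and the NWI-10 exclusion described next is left out.

```python
# A minimal sketch (not the actual scripts) of diffing the country attribute
# of AS numbers between two delegated-extended statistics files, assuming the
# standard format: registry|cc|type|start|value|date|status|opaque-id.
# File names are illustrative; the NWI-10 filtering is not included here.
def asn_countries(path):
    """Map ASN -> country code for allocated/assigned ASN rows in one file."""
    out = {}
    with open(path) as f:
        for line in f:
            if line.startswith("#"):
                continue
            fields = line.rstrip("\n").split("|")
            if len(fields) >= 8 and fields[2] == "asn" and fields[6] in ("allocated", "assigned"):
                out[int(fields[3])] = fields[1]
    return out

if __name__ == "__main__":
    before = asn_countries("delegated-ripencc-extended-20220223")
    after = asn_countries("delegated-ripencc-extended-20231101")
    for asn in sorted(set(before) & set(after)):
        if before[asn] != after[asn]:
            print(f"AS{asn}: {before[asn]} -> {after[asn]}")
```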

However, I got an answer that the second stage of the NWI-10 deployment was made on November 30th, 2022, so I compared the files dated November 29th and December 1st. 33 of the 54 AS numbers changed the country attribute during that day. It does not mean that there are no cases worth looking into among them, but for now I exclude those numbers from the research because they demand additional consideration, except one number of the 33 that changed its attribute from RU to UA and then changed the attribute again. Four of the 54 were absent from these files, so I think that they were returned to the RIPE NCC and then redelegated to Russian LIRs. Only 6 of these 54 AS numbers were mentioned on the RIPE NCC site in the transfer statistics. So I got 18 AS numbers to place on the map.

Then I collected addresses from the Whois service and tried to convert these addresses into longitude and latitude coordinates. It was one of the most challenging parts of the research, because most of these AS numbers' LIRs have Ukrainian addresses with the country Russia, and services that convert addresses to longitude and latitude go crazy when trying to answer such requests. This data was obtained from the RIPE NCC. I know this data was provided by LIRs, but something has gone wrong if you have such a situation.

And here are the labels of these AS numbers located on a map of the occupied territory of Ukraine. Nothing unexpected, but what do we have now?

So, initially the data was collected for the community as a repository to contact the resource holder. Nobody cared about the country where the resource was allocated, but times have changed, and NWI-10 confirms it. The set of policies, and the RIPE NCC itself, did not adapt to the state of war in part of the service region. For example, how a Ukrainian company can become Russian is very strange, but it is a usual thing in the RIPE NCC database. Maybe we need a separate task force to consider these questions and possibly suggest some solutions to the community to make the data more clear and comprehensible.

My thanks to the people who initiated this work and helped to fulfil it.

Thank you for your attention.

DMYTRO KOHMANYUK: Thank you, I hope you can hear us. I am glad you could present these results, and thanks to the RIPE NCC for sponsoring. There is a remote question that Jan is going to read.

JAN ZORZ: Thank you. So we have a Q&A, we have a question, well, it's not a question, it's a statement from Oksana:
"First of all, I would like to say a huge thanks to Taras for his extremely important and important contribution ADAMANT and Ivan Pietukhov in person not only for hosting RIPE 87 Kyiv hub but also for being a partner in our project. Unfortunately we had an airalert in Kyiv and I can't go to the office, all bridges in Kyiv are closed, so I would kindly ask you to deliver my question to Taras and other participants of the meeting. What next steps regarding the citizenship of resources have to be prioritised? Oksana."

TARAS HEICHENKO: First of all, can you hear me?

DMYTRO KOHMANYUK: Yes, fine. Loud enough.

TARAS HEICHENKO: Thank you for the question. The next steps depend on our aims. I think that, first of all, we need the data to be more consistent, and we need to have a better history of the data, to track what was done and what really happened during this recent period, so that in the future we can understand what was done. Besides that, I think that consistent data is important not only for the Ukrainian resource holders but for the whole RIPE community.

DMYTRO KOHMANYUK: I would agree. Any questions?

SPEAKER: It's my own question, so Tobias ‑‑ speaking for myself. I was wondering whether you looked at how the DNS changed for those prefixes which changed citizenship?

TARAS HEICHENKO: I did not consider DNS in this research, I only looked at AS numbers. As I said, I looked a bit at IP addresses, but IPv6 really did not change citizenship, and for IPv4 it is a more complicated history to search out what happened there; it will take more time.

SPEAKER: Okay, I might be able to help with that, especially the DNS part.

TARAS HEICHENKO: Thank you.

DMYTRO KOHMANYUK: Go ahead. Any other questions, Jan?

JAN ZORZ: No.

DMYTRO KOHMANYUK: Thank you, I am happy you did this work, and I am sorry about what's going on in Ukraine, for everybody who knows that, and I think you should continue this research, because I think we can improve the database and its processes, and maybe that's something for the working groups. But let's go on to the next item in the programme, and don't leave, we have an important announcement at the end of the session, actually two of them. The next talk is another remote presentation, if I'm not mistaken, Louise van der Peet from TU Delft. Oh, Meetecho, then, thank you. She isn't connected to the Meetecho platform right now.

SPEAKER: She says she sent a video, but I don't know where it is.

DMYTRO KOHMANYUK: Maybe we can put the slides on the screen and go through them, it would be better than nothing. We hear you, maybe we can click through these slides.

LOUISE VAN DER PEET: I have also uploaded a prerecorded video, but if you want I can talk about it now as well, if that didn't work.

DMYTRO KOHMANYUK: Better if you do your presentation now live.

LOUISE VAN DER PEET: That's great.

I am from TU Delft in the Netherlands, and today I am presenting a standard for increased transparency and privacy on the web called privacy.txt. While I was doing this research I got increasingly frustrated about privacy and cookies on the web in general, and I think most people understand this frustration when they see these types of cookie banners. You can see two examples. In the first one, you can only accept whatever cookies you might get, which might be tracker cookies, might invade your privacy: are you going to accept them? In the second example, you have to unclick all of the cookies separately in order not to be tracked.

So, both are quite tedious and annoying.

But there are actually laws for this in Europe. The first law was introduced in 2002, the ePrivacy Directive, and it says that all but necessary data collection requires consent. Eventually this led mostly to the first type of banner, where they just notify the user: okay, we are collecting your data. Then in 2018 the GDPR was introduced, which gives new requirements for consent, specifically that consent should be freely given, unambiguous, specific, informed and purpose limited. Nowadays we still see lots of consent banners that violate the GDPR; for example, both of these do. In the second case, this cannot be considered unambiguous, because you really have to do a lot to reject your consent while you can easily accept everything.

So, I'm not the first researcher that looked into this. There are many problems with cookies and trackers on the web. So 90% of websites use tracking cookies, and even if you do not give your consent on a website, often this will still be ignored. So if you reject it, you will still get the cookies, and 80% of websites are not compliant under the GDPR currently.

So I made a solution for this. It's called privacy.txt and it's based on other .txt standards. Basically, we try to increase transparency by reporting cookies and their attributes, banner information and the privacy policy location. Another property of privacy.txt is that it's server side and self‑disclosed, so the website owners put the information on their website themselves. The format is machine readable, so it's easy to create and also to audit, and it has a single reference point: it will always be put at, for example, website.com/privacy.txt. You can see an example of a privacy.txt file; I have put an arrow which shows the attributes: the cookie names, the cookie domain names, the duration, whether it's first or third party, whether it's optional, HTTP only and the secure value.
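To give a rough idea of the format, a minimal, hypothetical sketch of what such a file might look like follows, based only on the attributes just described; the field names and syntax here are illustrative assumptions, not the published grammar.

# privacy.txt (hypothetical example; field names assumed for illustration)
Policy: https://example.com/privacy-policy
Cookie: session_id
Cookie-Domain: example.com
Duration: 30d
Party: first
Optional: no
HttpOnly: yes
Secure: yes
Cookie: _ads_tracker
Cookie-Domain: ads.example.net
Duration: 365d
Party: third
Optional: yes
HttpOnly: no
Secure: yes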

So, the advantages of privacy.txt are that it's machine readable; it's easy to adopt because the grammar is very simple; there's no change needed on the user side, only on the server side; it does not interfere with browser functionality; it covers most parts of GDPR compliance and also compliance with laws from other countries; it has a potential for high accountability; and it has transparency as a priority.

Furthermore, we also made supporting tools, mainly two: a data collector tool and a cookie compare tool. The data collector tool collects cookies from websites in real time and can automatically create privacy.txt files. The cookie compare tool can then compare two of these files, or a file against real‑time cookies from the web, in order to verify privacy.txt files or for auditing purposes, which can actually be done at large scale.
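To illustrate the kind of check such a compare tool could perform (this is not the authors' implementation, and the privacy.txt syntax it assumes is hypothetical), a short sketch in Python might look like this:

# Hypothetical sketch: compare cookie names declared in a privacy.txt-style
# file against cookie names actually observed in a browser session.
# The "Cookie:" field name is an assumption for illustration only.

def parse_declared_cookies(privacy_txt: str) -> set[str]:
    """Collect the cookie names declared in a privacy.txt-style document."""
    declared = set()
    for line in privacy_txt.splitlines():
        line = line.strip()
        if line.lower().startswith("cookie:"):
            declared.add(line.split(":", 1)[1].strip())
    return declared

def undeclared_cookies(observed: list[str], privacy_txt: str) -> set[str]:
    """Return observed cookie names that the site never declared."""
    return set(observed) - parse_declared_cookies(privacy_txt)

# Example with made-up data:
declared_file = "Cookie: session_id\nSecure: yes\nCookie: _ga\n"
observed_names = ["session_id", "_ga", "third_party_tracker"]
print(undeclared_cookies(observed_names, declared_file))  # {'third_party_tracker'}

A real auditing pipeline would of course also compare the other attributes, such as duration, party, HTTP only and the secure value, but the idea is the same.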

So, there are some challenges in the automatic cookie generation. When we collect from the web, sometimes the data collector will be recognised as a bot, so it will not have access to the website and therefore cannot collect cookies. Sometimes cookies require user interaction to be activated; for example, scrolling or clicking a YouTube video activates a cookie, and the data collector tool does not do this, so then not all cookies will be recorded. Furthermore, we made a banner detection tool that detects the "accept all" button of the banner in order to record all cookies, and this banner detection tool has about 80% accuracy, so it is possible that not all cookies are collected when the "accept" button is not clicked. Sometimes cookie attributes change quite randomly: in this image you can see a session tracker cookie which in the first part has the secure value set to true, while in the second part the secure value is false, and this happens within seconds without user interaction. These changing attributes might cause errors in the privacy.txt files, which you cannot do a lot about.

Now, on to the real world impact: the solution could empower users by providing more privacy transparency, so users can see in one place what's going on with their cookies and data.

It can help websites comply with the GDPR or other privacy laws by easily verifying the privacy.txt file and eventually being able to see whether what they are doing is compliant. And third, it can facilitate auditing, as it allows for large scale auditing of cookies and privacy features on the web.

So to sum up:

Currently, GDPR compliance on the web is low, and we need more privacy and transparency for users. Privacy.txt is proposed as a transparency standard, along with auditing and generation tools to make its use a lot easier.

So, thank you for listening, everyone. And let me know if you have questions.

(Applause)

SHANE KERR: Thank you, this is an interesting idea, so, I think this could be useful and it's definitely better than the situation that we have today. My question, though, is: Is the objective to make this something that's required by regulation or is the expectation being that we would ‑‑ websites would kind of opt‑in to using this technology? And the reason I ask is because my suspicion is that web providers are happy to provide a bad user experience to make us hate the GDPR instead of hating them for not protecting our privacy. So, if that's true, then they are not going to adopt this technology voluntarily; they are going to have to be dragged kicking and screaming by Brussels. What's the plan, I guess, is the question?

LOUISE VAN DER PEET: So, it would be great if it could be part of a regulation, making this just mandatory for websites, but I don't see that happening in the near future. Instead, what I think is that businesses do have an incentive to use this standard. Maybe that doesn't hold for big tech, but small to medium size enterprises spend a lot of money on this, and eventually it will also help them to have a solution where they can easily verify their own compliance.

So, in that case there might be some adoption from those types of businesses and hopefully others will follow.

SHANE KERR: I share your hope. Thank you.

DMYTRO KOHMANYUK: Any other questions, remote or local? I don't see any speakers. We have two announcements; Brian will make the second one and I will make the first one.

There will be a social tonight, and it will be starting not at the time printed on your badge but later, at 9:00. There will be some buses departing from Sheraton 1 and 2. There are printed announcements stating when the buses are going and returning, and some of them stop by the Ibis Hotel on the other side of the road. Over to Brian for his other, special and more important announcement.

BRIAN NISBET: How to get home is pretty important. My ongoing and desperate need to speak in every single session today.

We would normally announce this at the beginning of the 4:00 plenary, but as we don't have one, for lots of good reasons, we will announce it now. Unless someone literally sends us an e‑mail in the next 60 seconds, we have nine candidates for the PC elections, for two seats indeed, and thank you to all the people who volunteered for that. The voting will be opening at 16:00; it should be on the front page of the RIPE 87 website, and we are going to send you all an e‑mail as well just to make sure, because we know you are all spending this meeting reading your e‑mail. Just to name the nine people who have volunteered, and thank you to all of them: we have Valerie Arrora, Alexander Azimov, Chris Buckridge, Harry Cross, Sander Kamall, Dmitry Kraneuk, Hannah Creighton, Franziska Lichtblau and Kevin Meynell. There will be bios of all of these people on the RIPE 87 website, and you can go and vote and all the rest there. Thank you very much.

DMYTRO KOHMANYUK: Okay. One more thing: the NRO NC vote. You should have received a separate special e‑mail about that. The NRO Number Council is an important thing; there are three candidates and you can vote for them. Well, some people can, but I guess that's a minority. Check your e‑mails please, and see you later.

LIVE CAPTIONING BY AOIFE DOWNES, RPR
DUBLIN, IRELAND