RIPE 87

Archives

Routing Working Group

30th November 2023

At 2 p.m:

IGNAS BAGDONAS: Good afternoon, welcome to the Routing Working Group. Paul Hoogsteder and Ben Cox are joining us remotely. Welcome to the session and let's get started.

What's different this time? We have presentations, we have questions, we have discussions and we have more time. That means we will hopefully not overrun this time, as the Routing Working Group has been known for having to cut the mics. This time we have sufficient time, so please use it: please discuss and raise questions about the presentations.

PAUL HOOGSTEDER: Please raise your questions using the Q&A function, not the chat.

We have three presentations after this introduction talk. Does anyone want to add something to the agenda?

IGNAS BAGDONAS: Last call. That means no.

PAUL HOOGSTEDER: Do you want to add something?

IGNAS BAGDONAS: No, thank you. The agenda is over there; we will go through those three presentations and then, running a little bit ahead, we will have this part: basically open questions about what you would like to see happening with the Routing Working Group. There will be some introduction for that, but please think about it. We meet twice a year, see some presentations and have discussions. What else is the Routing Working Group doing, what is it supposed to do, what is it chartered to do, what are the deliverables, and do you remember when the last document was published from the Routing Working Group? You might want to look that up.

So, open questions. What to do with the Routing Working Group: do we continue like this, do we need to change something, or maybe leave everything as it is? It's an open discussion, an open microphone, after the sessions, closer to the end. Right now, please think about what you would like to discuss. Most of that will continue later on the mailing list, but this is just to initiate a high-bandwidth discussion here.

Going back on our agenda.

PAUL HOOGSTEDER: Approval of the minutes from the previous meeting, does anyone have any comments about these? Going once, going twice. Then, they are approved, thank you.

I wish to bring up the subject of co-chair election, as many other Working Groups have done, especially during this meeting. There are a few things which we need to keep in mind, such as: do we want set terms for co-chairs, or do we just ask the Working Group every few years whether there are any volunteers for the positions? And of course sometimes Chairs leave and we need to select someone a bit sooner.

I wish to remind you that the Chairs are always selected at the meeting; however, the discussion about the co-chair election process we can have on the mailing list. I think I will send out a proposal on the mailing list shortly after this week, giving some ideas from me and of course my co-chairs about how to go forward with this, and you can leave your comments and suggestions on the mailing list.

I think we can go into our next presentation.

IGNAS BAGDONAS: Yes. So with this, we start with the ‑‑

MIRJAM KUEHNE: I am sorry, you don't want any comment ‑‑ not about the specifics of the co‑chair ‑‑

PAUL HOOGSTEDER: I don't think we want to have a discussion right here; we would rather have that on the mailing list.

MIRJAM KUEHNE: Okay. I will comment on the mailing list, that's okay.

IGNAS BAGDONAS: Mirjam, the last part of the session is the open mic; please use it for that, as that is one of the questions to do with the Routing Working Group. Right now we will try to stick to the agenda, and the open discussion will eventually, we hope, overflow into the mailing list.

MIRJAM KUEHNE: I will comment on the mailing list on the co‑chair selection procedures, right.

IGNAS BAGDONAS: With that, let's start with our agenda. The first presentation is on 6G and AI and other interesting and fascinating topics. One presenter is physically here and the other will be remote; both the physical and the virtual stage are yours, please take it from there.

MARCO PAESANI: Ladies and gentlemen, good afternoon. It is a great pleasure, and thank you to the Working Group for choosing my presentation. With me is my colleague Pietro Cassara, and we will try to introduce a new development in routing, because routing is involved every day. In particular, we will discuss 6G and non-terrestrial networks (NTN) and the opportunity this new kind of routing gives to exploit SRv6 and artificial intelligence.

Outline:

Our agenda is composed of two parts: the first part is by my colleague, Pietro Cassara, and the second part is by myself.

I pass the microphone to him. Are you ready?

PIETRO CASSARA: Marco.

MARCO PAESANI: I hear you. Okay, I can hear you, you can? Just a moment. Obviously the network ‑‑

PIETRO CASSARA: Can you hear me?

MARCO PAESANI: Yes, I hear you.

PIETRO CASSARA: Perfect, okay. I can switch the slides on myself or I need your help?

MARCO PAESANI: You can switch by yourself.

PIETRO CASSARA: Okay, perfect. So, thank you for this opportunity, it is a pleasure. I am sorry that I can't be there today, but there was a problem with the public transportation today, so again, sorry. I am a researcher at the National Research Council in Italy and I am now working on the possibility of integrating the non-terrestrial network into the 5G infrastructure as we move towards the 6G era. Today I just want to introduce briefly what this means for the technology and how routing can help to perform this integration well.

So, just a few notes about NTN: the technology was born in the late 1980s, so it has been available to the telecommunication market for a long time, initially with geostationary satellites, but recently it has evolved towards constellations, UAVs and also high-altitude platforms. This kind of infrastructure can help to extend the functionality of the terrestrial network. What does extend mean?

Extend means that satellites and high-altitude platforms can act as base stations to duplicate or extend the coverage of the terrestrial network, so we can increase the bandwidth, we can improve the reliability, and we can multiply the points where the network can be accessed by the users.

So for all the new communication scenarios defined by the 5G protocol this is a very great opportunity: for the machine-to-machine applications, for the wideband applications, especially for the ‑‑

Okay. So, this is the classical classification used to describe the opportunities for communication where non-terrestrial platforms can be involved in the communication services.

So, we have two levels of classification: by application scenario, which means rural, remote and isolated, and also a classification that depends on the platform that can be used. So we can find, at some hundreds of metres from the ground, usually the high-altitude platforms, and moving further out into space we can find the satellite platforms that can act, as I told you before, as base stations of the access network from the ground.

Okay. Also, the European standards body for communications has, in the last five or six years, made a very great effort to standardise this kind of technology; in fact they have issued many releases of the 5G protocol whose objective is to define the guidelines to integrate the NTN platforms with the 5G network. This is just a brief representation of this effort. We can see, for example, in the column on the left that many ETSI Working Groups are involved: for the service applications, for the transport, and so on.

Okay. So, very briefly, the components that characterise the non-terrestrial architecture are the service link, which is the link usually used to connect the user equipment to the first access point in the non-terrestrial network, which can be a satellite, a UAV or a high-altitude platform; then we have the inter-satellite links, which are used to connect the satellites involved in the constellation; and we have the feeder link, which is the link back towards the ground, where we find the NTN gateway, usually the first point connected, in the case of the 5G infrastructure, to the core network, which then forwards the data to the data network.

So, starting from this very simple idea, the interesting use case that we can have with the non-terrestrial network is the possibility of a multi-connectivity scenario, extending the idea of multi-connectivity to the non-terrestrial network. We already have multi-connectivity defined in the 5G protocol, which means that a UE can usually be connected at the same time to the new radio and also to the eNodeB, for example with LTE technology. In this case we can also extend this multi-connectivity, in the sense that from the ground we can also have an access point towards the non-terrestrial network. Note that this is also possible because the 5G protocol allows the UE to access the network using a non-5G new-radio technology, for example Wi-Fi or Ethernet.

Another characteristic, another feature that we can find in the NTN infrastructure, is that with a single satellite we can have three different kinds of technology used for coverage on the ground, in the sense of connectivity. In the case on the left we have a satellite that is able, with multibeam technology, to illuminate different regions on the ground at the same time. Note that from the perspective of the 5G protocol each area can be seen as a single cell, like in the new-radio technology. Obviously the most used approach is to have one satellite for a single cell, especially when the non-terrestrial network is based on geostationary satellites, while with LEO satellite technology it is possible to have coverage based on different technologies. The point of this slide is that when we have a user moving across the cells covered by a single satellite or by multiple satellites, we have a problem managing the handover. This means that if we want to guarantee connectivity using, for example, a constellation of LEO satellites, where the satellites move relative to each other while the user stays fixed on the ground, we need to perform this kind of handover continuously, and this is not good for the throughput and for the latency of the communication links.

So, in this case it is mandatory to find a solution to avoid this kind of problem, and this is actually the key point that I want to highlight here today: accordingly, we propose a protocol stack and an infrastructure where a routing service based on IPv6, and then on SRv6, can be used to address this kind of problem, to guarantee continuity of connections for the network while avoiding the continuous handover that is usually performed at the radio level in the 5G protocol.

Okay. What you can see here is a representation of the normal interfaces used by the 5G protocol, the 1 and 2 interfaces, used to guarantee the mobility and the service management of the user when they first access the network, or when they move among cells and the core network needs to guarantee connectivity for the user.

We have two different kinds of architecture defined, one for the control plane and one for the user plane; you can see the control plane in the upper part of the figure and, below it, the user plane architecture. Because in the 5G protocol ‑‑

MARCO PAESANI: You have five minutes more.

PIETRO CASSARA: I can move on, sorry. The idea is that the satellites have interfaces used to guarantee the radio connection from the satellites towards the ground, according to the service link and the feeder link, while the links among the satellites, the inter-satellite links, are handled by the segment routing protocol, because the features of this protocol for managing the path according to each segment constructed during the forwarding procedure are interesting.

I can move on, because this is only another feature that is interesting for using the SRv6 protocol. According to the definition of the interfaces in the 5G protocol, this kind of splitting can help the organisation of the infrastructure, because it helps to move onto the satellite only the forwarding part, while the logical part, the one that generates the path according to the SRv6 process, can be managed in the NTN gateway, that is, on the ground, because for example the core network belongs to the satellite operator. Another thing that is very important for applying the SRv6 protocol is the possibility to use tools like the IFIT tools, which help to guarantee in-band metering of the status of the links of the network, so we can measure and generate policies for the forwarding path according to the measurements that we obtain, and naturally SRv6 can support this kind of application.

Obviously, if we have this kind of measurement available for the network, and given the features of the SRv6 protocol for network programming, we can also think of arranging a management infrastructure that is automated according to the state of the network; this is another reason to use SRv6. The last two interesting applications, which my group is now investigating as research topics, are the possibility of having a digital twin of the network, a model of the network that gives a prediction of the status of the network, so that we can proactively allocate resources to optimise the latency, the throughput and so on, exploiting the features of SRv6 and the possibility of telemetry; and finally, we have completed a test bed on the possibility of having a tool, based on learning algorithms, where QoS is managed according to the QoS management of 5G, involving the SMF, the UPF and so on.

So, now, I hope that it is clear why we are investigating this and spending effort to propose this kind of architecture. For the aspects related to SRv6 I leave the floor to Marco, who for sure is better prepared than me on this kind of protocol. Thank you for your attention and for the opportunity again, and if you have some questions, I don't know if we have time at the end to answer them.

MARCO PAESANI: Thank you, Pietro. Now I will discuss SRv6 with you. How many people know SRv6? That is a lot, fantastic, I am very, very happy about this.

What is SRv6? SRv6 is the future. With SRv6 the network will be an intent-based network, because it is a solution for having any-to-any connectivity, and you can also have dedicated networks through slicing. You no longer have one network, you have a lot of networks, because you can do slicing. All these networks can be driven by intelligence; for this reason the title mentions artificial intelligence, and Pietro was talking about telemetry.

SRv6 is a standard: RFC 8754 from 2020 describes the new segment routing header. There is a standard and there is also support from many vendors; at the bottom of the slide you can see the mainstream vendors, I will not present them because you already know them, and many vendors are working in this area. Also this year at EANTC there were interoperability tests between the vendors.

What is the difference between SRv6 and the other protocols? The main difference is in the header of the packet, because in the header you will find the different hops that the packet will go through in your network. This is the most important feature that SRv6 has. The other one, also in the header, is the type, length and value fields, the TLVs, because on a single packet you can decide which type, which length and which service you can have on your network.
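To make the segment list and TLV idea concrete, here is a minimal sketch of the RFC 8754 Segment Routing Header in Python. It is an illustration only, based on the RFC's field layout rather than on anything in the slides; the addresses are made up, and no validation or TLV encoding logic is included.

    import socket
    import struct
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SegmentRoutingHeader:
        """Illustrative RFC 8754 Segment Routing Header (SRH)."""
        next_header: int          # protocol number of the encapsulated payload
        segments_left: int        # index of the next segment to visit
        segment_list: List[str]   # IPv6 SIDs, final destination first (reversed order)
        flags: int = 0
        tag: int = 0
        tlvs: bytes = b""         # optional TLVs, already padded to 8-octet multiples

        def pack(self) -> bytes:
            sids = b"".join(socket.inet_pton(socket.AF_INET6, s)
                            for s in self.segment_list)
            # Hdr Ext Len is in 8-octet units, excluding the first 8 octets.
            hdr_ext_len = (len(sids) + len(self.tlvs)) // 8
            last_entry = len(self.segment_list) - 1
            fixed = struct.pack("!BBBBBBH",
                                self.next_header, hdr_ext_len,
                                4,                  # Routing Type 4 = Segment Routing
                                self.segments_left, last_entry,
                                self.flags, self.tag)
            return fixed + sids + self.tlvs

    # A packet that should visit fc00:a::1 and fc00:b::2 before reaching 2001:db8::1.
    srh = SegmentRoutingHeader(next_header=41, segments_left=2,
                               segment_list=["2001:db8::1", "fc00:b::2", "fc00:a::1"])
    print(srh.pack().hex())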

The main difference between the other protocols and SRv6 is in this slide. It is very simple, very, very fast, and you can use a controller to do this in real time, packet by packet. Packet by packet, I repeat it again because it is often not clear: many people don't consider that our networks work packet by packet, and SRv6 can act on that. It is also at the IP layer, there is no other encapsulation on top, and the service configuration is very simple. Very, very simple.

You don't have to care about the topology of your network, finally. How much time do you spend describing the topology of the network, and how much time do you invest in setting up the IGP, the BGP and the other protocols? And the network doesn't grow, it is not fast, it is not routing. The recovery time is up to 50 milliseconds; you could never imagine this with another protocol, and you can change your path in 50 milliseconds. Very, very impressive.

And you can also do more and more with slicing on a single network. This is an example of how to deploy SRv6. SRv6 is transparent to the network: you can do a very simple setup in overlay mode, where you set up the start point and the end point and next you set up the core points, or you can do it step by step like other protocols, no problem. Easy evolution, very, very easy evolution.
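As a rough idea of what setting up the start point and the end point can look like on the ingress node, the sketch below uses Python to call iproute2 on a Linux box with SRv6 support (kernel option CONFIG_IPV6_SEG6_LWTUNNEL, root privileges). The prefix, segment identifiers and interface name are invented for the example; this is not a configuration from the presentation.

    import subprocess

    # Hypothetical values: destination prefix, two intermediate SIDs, egress interface.
    prefix = "2001:db8:dead::/48"
    segments = "fc00:a::1,fc00:b::2"   # visited in this order before the final destination
    ifname = "eth0"

    # "encap seg6 mode encap" wraps matching traffic in an outer IPv6 header
    # carrying a Segment Routing Header with the given segment list.
    subprocess.run(
        ["ip", "-6", "route", "add", prefix,
         "encap", "seg6", "mode", "encap", "segs", segments,
         "dev", ifname],
        check=True,
    )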

This is the final slide. Why do you need SRv6? Because this is the question: why? There are a lot of reasons, but I will try to explain only five, and that is enough. The future, obviously, is IPv6, all over the world. The number of people is increasing, the number of devices is increasing, the topology is growing. And the network, as you heard from Pietro Cassara, is now also in space; the destination is completely different and you must have a very good protocol. We already know BGP, many people know BGP, and for me BGP is the best, the golden protocol, because it is really used, and you can also run BGP over SRv6, with SRv6 as the transport. But for me it is not only segment routing, it is super routing: you can do more and more with your functionality, for today and for tomorrow. Don't forget about 5G and 6G: you have a lot of devices to control every day, about 10 million per square kilometre. You cannot do that with IPv4, it is impossible, nobody can.

The forwarding plane is only SRv6, as you saw before. No labels, finally, no labels.

Network scalability is independent of the number of network elements. That is very, very powerful.

Easier: only one protocol, IPv6, for today and tomorrow. That's enough. Less configuration and easier planning for deployment.

Load balancing is native, finally. When you have two paths, or three paths, or twelve paths, nobody knows where the traffic is going; with SRv6 you can control this, also with multiple links, two different ones, even 10 kilometres apart, no problem, and you can even do it per channel.

And finally, the AI is really AI, because if I can control the policy and I can change the next hop of a single packet, you can do whatever you want: the slicing, the different networks. You have a new dimension of the network, not only two dimensions but also the vertical one. When you drew your network, you took a paper and you designed it; tomorrow you must have three dimensions, because this is one network and you can do more and more, one on top of another.

The path is programmable, with application-aware programming.

Thanks for your attention.

(Applause)

IGNAS BAGDONAS: Thank you, questions, comments.

PIETRO CASSARA: I want to add one thing: what we presented today is not only our research topic, even though I presented myself as a researcher; we also participate in the standardisation committees, and we discussed on Thursday and Monday the possibility of making a standard, of making this kind of technology real. Sorry, again.

TOM HILL: Thank you for the clarification. Tom Hill, I work for British Telecom. I have a lot of concerns with some of the things you have said.

MARCO PAESANI: Welcome.

TOM HILL: SRv6 is not the future, singular. It could be for some things, I absolutely agree, there are use cases for it. What I am concerned about is the narrative that SRv6 is IPv6 and SRv6 requires IPv6 and vice versa. That is not true, and we need to decouple deployments of SRv6, and the benefits therein, from deployments of IPv6, because it is harming deployments of IPv6. All of the things that you have reported are features of SRv6, thanks to its brilliance, apparently; they are all things we can do in SR-MPLS. You can set SRv6 aside and, with RFC549, these are all supported and work. The overheads of SRv6 are concerning, and the security problems that have been introduced with further standards or drafts for SRv6 and its forwarding techniques are seriously concerning, so I would love it if you could give some more thought to the security considerations, give a bit more thought to a world where IPv6 is of course the most important protocol and is the one that we build with, but please, please, please, do not tell everyone that you have to use SRv6 if you want to do IPv6. Because it is not true. Thank you.

MARCO PAESANI: Thank you. Obviously, this is a new protocol, only three years old, and I think it is, for me, an opportunity to grow. I don't see it as against another protocol. I have tested both protocols, and there are some configurations you cannot do. But the future is left to me and also to you: what is the right answer? I am not sure. But it is a good opportunity to grow very, very fast, and it is very easy. You must replace a lot of equipment, I know, because the software is very new, but it is a new opportunity to grow. There are a lot of limitations with labels, you know the limitations on the number of pieces of equipment with labels, and MPLS is a very good protocol, it is normal, it is working every day, we know this.

TOM HILL: SR-MPLS is an exceptionally good protocol; it is as good as we can build, as far as I am concerned. The scale challenges with label space are significantly lower with SR-MPLS, so that is one thing that is reasonably true already. But just to your point here: your follow-up was very much "we are still learning, this is new, things are still research", but that wasn't the content of your presentation to begin with; it was "this is the future and you must do it". That is how it came across. So I am happy that we are still learning and still forging a path and trying to figure out what the correct answer is, and I am more than happy to continue contributing to that. But yeah. Thank you.

IGNAS BAGDONAS: Tom, before you run away, it is basically a question then to you as a member of this community. What is happening here, and in general, I would say, is a miscommunication: there are different entities that see mostly the same aspects in very different ways; there is the operational view of the world and the academic view of the world, and those worlds don't necessarily interconnect, so to say.

Do you see this, and by this I mean the Routing Working Group, as a place for continuing these discussions? This is related to the last part of what we are planning to have, about what the Routing Working Group can do. Do you see this as, well, a piece of feedback saying that it was a nice presentation but a little bit misaligned with reality, or do you see that this can lead somewhere?

TOM HILL: I don't want to speak on behalf of the entire community but in my opinion.

IGNAS BAGDONAS: As a member of the community

TOM HILL: In my opinion there is work that needs to be done and awareness to be raised; these matters are really important to operators. Something we say a lot is that there are not enough operators in the IETF, and with a lot of things being developed there and standards discussed and debated, there is definitely a place for an operator community such as RIPE to be able to expand on that, consider the consequences, the security considerations; it is important that we talk about it. But, yeah, messaging is also important.

SPEAKER: Juniper Networks. Just one question, a quick one. First of all, did you consider all use cases, such as multicast? Because when segment routing over MPLS was introduced, and you know that very well, it did not and still does not directly support multicast. That was a drawback, leading many people who were running MPLS networks to keep the good old LDP in place, because it can do point-to-multipoint. So that is my first question: did you consider multicast? And then I have one short one.

MARCO PAESANI: I represent only myself, and probably you are bigger than me. This is my personal opinion: I don't know about multicast for the future. I have not implemented multicast on SRv6 and I do not have any experience with it. I have a lot of experience with SRv6 today; I have implemented SRv6 as opposed to other solutions, not only MPLS. Also, many people want to not implement MPLS and to use other transports, also static transport, no problem. I talk about what I do, not about the future. But there are interoperability tests, and about the future I don't know, it is only three years old. I don't have a real answer for you, I am honest.

SPEAKER: It is not only three years, it is quite a bit longer, because SRv6 started with Comcast and then evolved into a larger story, so the protocol is somewhat getting into its maturity.

My concern here is, when I speak about use cases, I am not concerned about whether you are going to use MPLS or SRv6; if you are introducing something, you need to take into account everything that is running today. Multicast is a substantial part, and we had a BoF about that yesterday: it is a substantial part of the community, and not only of the community, but of the means of transporting information over the network from one to many. So I think that one deserves to be covered in the future by whatever mechanism that is.

And the second question I wanted to ask is ‑‑

IGNAS BAGDONAS: One minute. Please ask but that's one minute.

SPEAKER: Sure. Did you consider migration scenarios from the current implementations towards the new architecture?

MARCO PAESANI: I studied the situation, not the whole network, with Pietro; this is the presentation, and I studied it only for the local area, for local providers here in Italy. These two providers in Italy no longer use any other transport layer; SRv6 alone is enough.

SPEAKER: Thank you.

IGNAS BAGDONAS: So, thanks for your views, thanks for your presentation and please continue this discussion that you started here afterwards, it appears that there's a certain level of, say, miscommunication or misalignment in the views that could be rectified.

Right. Next topic, Ties de Kock giving an update on repositories.

TIES DE KOCK: I want to give a quick update, or a short deep dive, about RPKI repositories. This is a detailed RPKI topic about how we made something more resilient for you, the community, and something that may be quite relevant for this Working Group.

Let me figure out this clicker thing. But first, I want to do a quick call to action about the user experience research that we are doing. As you may have heard, we are looking at the UX of a lot of projects at the RIPE NCC, and my colleague Antoinella has finished an initial user study about the experience of the current dashboard and has a new prototype, a mock-up, for user experience research. During this week she has done 14 usability tests, but more participants are very welcome; if you want to participate this week, drop by at the coffee break room or, remotely, shoot her an e-mail, because we want to make something that is not drastically different but easier for you all to use.

Now, let's continue to my actual content: the deeper dive into the RPKI repositories. I think you all probably know RPKI, but I think it's good to have another short introduction, and here is a sheet I took from my colleagues in the learning and development team. We all have an RPKI trust anchor, and under those trust anchors users create signed RPKI objects. Those objects are then retrieved by relying party software, the RPKI validators; they validate these objects, and the validated payload, which is routing policy, is then injected into routers.
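As a small aside on what injecting the validated payload into routers means in practice, here is a sketch of RFC 6811-style route origin validation against a set of validated ROA payloads. The VRPs below are invented examples and this is my own illustration, not material from the talk.

    import ipaddress

    # Hypothetical validated ROA payloads (VRPs): (prefix, maxLength, origin ASN).
    vrps = [
        (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
        (ipaddress.ip_network("2001:db8::/32"), 48, 64500),
    ]

    def origin_validation(prefix: str, origin_asn: int) -> str:
        """Classify an announcement as valid / invalid / not-found (RFC 6811 logic)."""
        net = ipaddress.ip_network(prefix)
        covered = False
        for vrp_prefix, max_len, vrp_asn in vrps:
            if net.version == vrp_prefix.version and net.subnet_of(vrp_prefix):
                covered = True
                if net.prefixlen <= max_len and origin_asn == vrp_asn:
                    return "valid"
        return "invalid" if covered else "not-found"

    print(origin_validation("192.0.2.0/24", 64500))     # valid
    print(origin_validation("192.0.2.0/25", 64500))     # invalid: exceeds maxLength
    print(origin_validation("198.51.100.0/24", 64500))  # not-found: no covering VRP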

The validators download these over the rsync or RRDP protocol, and today we will look into those repositories.

Before diving into those, I want to introduce a slightly different view of RPKI, because I perceive two sides of the system: there is the side that operators consume, where it is about creating objects that affect routing, ROAs and that side of things, but I also see the other side of being a CA operator, which is about software, repositories, availability and certificates, and I have a slightly different perspective on that.

So, to me, the RPKI system is also a hierarchy of certificates and repositories. If you look at how this is structured at the RIPE NCC, we have a number of internal CAs, some we sign offline and some online, and the members have their own certificate authority below that. You can pick a hosted CA, where we manage the private key and you create the objects, and those objects end up in the repositories that we are talking about today. We also have delegated RPKI, which is a whole different topic.

If you look at this RPKI thing, it has grown over the last years and by now there are quite a few objects: around 95,000 certificates in our repositories, without checking the exact number. You can transfer these either over rsync or RRDP, so effectively it is a collection of a lot of small files: if you use RRDP, the client downloads an XML snapshot file or a delta; if you use rsync, it recursively lists the directories and transfers what is different. And as you can probably see from those numbers as well, rsync is a bit more efficient when you transfer the files initially; for consecutive checks I am not sure it is more efficient, probably actually not. But what is really important for me, as the CA operator, is that RRDP is much more efficient on the server side, because it does not consume CPU time on the server, or not a lot, when transferring files.
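To make the RRDP side a little more concrete: a relying party periodically fetches a small notification file and then retrieves either the snapshot or the missing deltas listed in it. The sketch below only does that first step, using the element names from RFC 8182; the notification URL is an assumption, and error handling and hash checking are omitted.

    import urllib.request
    import xml.etree.ElementTree as ET

    RRDP_NS = "{http://www.ripe.net/rpki/rrdp}"
    # Assumed notification location; substitute the repository you actually use.
    NOTIFICATION_URL = "https://rrdp.ripe.net/notification.xml"

    def fetch_notification(url: str = NOTIFICATION_URL):
        """Download the RRDP notification file and list the snapshot and delta URIs."""
        with urllib.request.urlopen(url, timeout=30) as resp:
            root = ET.fromstring(resp.read())
        session = root.attrib["session_id"]
        serial = int(root.attrib["serial"])
        snapshot_uri = root.find(f"{RRDP_NS}snapshot").attrib["uri"]
        deltas = [(int(d.attrib["serial"]), d.attrib["uri"])
                  for d in root.findall(f"{RRDP_NS}delta")]
        return session, serial, snapshot_uri, deltas

    if __name__ == "__main__":
        session, serial, snapshot_uri, deltas = fetch_notification()
        print(f"session {session}, serial {serial}")
        print(f"snapshot: {snapshot_uri}")
        print(f"{len(deltas)} deltas advertised")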

So, yeah, when I started in this team at the RIPE NCC, we had a relatively simple setup: the certificate authority software published files to the publication server and to an NFS volume, where the rsync server read them from, and the publication server was running behind a CDN. That had a few limitations. Maintenance, because if you have only one publication server instance and you do maintenance, it is probably not available. The other limitation, which is also a benefit because it is simple, is that rsync was reading straight from NFS. One key thing to know here is that you have these two transport protocols and the software only downloads over one of them; in practice it prefers RRDP, and rsync is mostly used when RRDP has an issue. If we break the RRDP repository, there is a bit of jitter before clients fall back and arrive at the rsync service, so you have infrastructure that suddenly needs to have a lot of capacity.

While serving rsync, we had the issue that with NFS you cannot cache the metadata, so when there is a fallback all these clients connect and they recursively list this massive directory with 22,000 files, which means a lot of stat operations on the file system, and the capacity limitation was around 250; that is where our storage peaked.

So, we have tried to make this more resilient by making it more complex; look at this new diagram. We have tried to evolve this based on operational needs and to evade failure modes we have encountered. First, let's continue with rsync, because we were just talking about that, and I never thought I would think this much about rsync in my life, to be honest. We now have a setup that does not read from NFS: we have a daemon that reads content from one of our publication servers, writes that content to disk, and serves it from a local file system. This means that the VM, because it is reading from a local disk, uses the cache, and suddenly the limitation is no longer stat calls but CPU. What is more important than rsync is RRDP, because that is the primary source preferred by these relying parties. For RRDP we have multiple deployments: two deployments in our own network and one outside it, and two CDNs in front of this origin, which we alternate between on a weekly basis, so the fallback is a regular activity. When we see any glitch in this RRDP infrastructure in our monitoring, which is quite extensive by now, we just switch over, because the switch is a safe process we can just do, and clicking a button to change the CNAME is much faster than any type of debugging you can do; in the end you would still switch over anyway, so why not switch over at the beginning and debug afterwards?

With this setup we think we have covered quite a few failure modes that we have seen, and we have introduced quite a bit of complexity, I have to admit. The most important downside we see of using CDNs is that all the CDNs have tiny glitches, and in that case you are really happy that you can flick that switch. On the other hand, we also have this publication server outside our network that we can point the CDNs to, and that is, for example, something that helped us recently when there was a public network glitch somewhere in Amsterdam.

Now, a little bit more on rsync, because you may have realised that we have these processes that synchronise from this server, and you have multiple instances synchronising from one source, so why does that work? There is actually a kind of simple realisation there, and that is that it takes a while to copy all these files and to process them with a validator. So, for a relying party in a realistic setting to see inconsistent content, it would need to first connect to one machine, transfer the files, process them with the validator, connect again and then hit a machine that is behind. In practice, because this takes at least 90 seconds to run, there is quite a bit of time for the instances to synchronise, so in practice you have quite a nice mechanism that causes files to move forward in time and for relying parties to see the new content.

Furthermore, for rsync, moving backwards in time is kind of ambiguous because of how manifest processing works; this is a nice active discussion which will probably come up in some IETF context, but as operators you should probably just know that it works and that we think we have made it better.

I hope this was at least a tiny bit interesting to you all, and I am happy to take any questions.

IGNAS BAGDONAS: Discussion time. Anyone?

(Applause)

Right, then, thank you for your presentation, and we move to the next one, an update on SCION.

NICOLA RUSTIGNOLI: Thank you, and thank you to the Working Group for the time. I am with the SCION Association, and today I would like to give you some updates about what has happened with SCION in the last couple of years.

Before we get started: how many of you have heard about SCION before? I see some hands raised. That's great. So my presentation will first give a very quick overview of what SCION is, just for the ones that are not familiar with it, and then I will focus on deployments and on the Internet drafts that we have been working on.

So, all right. What is SCION? SCION is a path-aware inter-domain architecture. Before this we saw SRv6, which brings path awareness inside a single administrative domain; SCION tries to do path awareness inter-domain, when it comes to routing traffic from one autonomous system to another. That means SCION enables use cases such as inter-domain multipath, enabling performance-based routing: if something goes wrong, endpoints can fail over to other paths quite quickly, and not only when something fails close to the endpoint but also in the middle of the network. That is what makes it different from other multipath approaches such as Multipath TCP.

This is all enabled by endpoint path control. That means that in SCION an endpoint does the path lookup, chooses a path to the destination based on a number of path characteristics, and then sends packets that contain this path information towards the destination.

And I think this property is interesting if you consider the security aspects of SCION. These paths are all authenticated during route discovery, and there is path authorisation in the data plane when these packets go onto the wire, so that gives the sender stronger guarantees that the packet is going to traverse the intended path, and it also enables use cases such as geofencing: making sure that traffic stays within a certain area of the network.

And because of those properties, the early adopters that we got with SCION are, for example, in the area of finance. The typical use case is an ecosystem of entities that need to communicate with each other in, let's say, a reliable way and that care about where this data is going; that is why finance, energy and so on, these somewhat more critical workloads.

This is all enabled by a very specific trust model, and this is given by the fact that SCION splits the overall network into what we call isolation domains. In this figure the big circles are isolation domains, and each of the nodes is actually an autonomous system. What is typical of these isolation domains is that each of them has its own root of trust, so there is no omnipotent root of trust as in other PKIs; trust is locally scoped to each of these trust bubbles.

And another interesting aspect is that the roots of trust are not in the hands of a single entity; there is a voting process, so that several entities are multi-laterally in charge of the governance of these trust root certificates, which then act as the root of trust in each isolation domain, and this is used by routing to authenticate paths. So I think this is quite an interesting area.

So how do you actually deploy it? Well, of course SCION is quite different from what we have today. SCION is meant to be inter-domain, so if you want to add SCION on top of an autonomous system, you usually add some SCION border routers at the edge of the network. These routers have SCION links to neighbouring SCION routers in other autonomous systems, but by design SCION tries to reuse whatever you have inside the autonomous system, on the data plane and on the control plane: if you have OSPF or whatever, then SCION runs, let's say, as an overlay inside the AS, but it potentially runs as native SCION when you go inter-domain.

So how does that work with actual clients? Well, SCION has its own data plane, so you can in principle add a SCION stack to an end host and this will natively talk SCION to other hosts in other networks. In the production deployments we have today there are IP-based applications, for which this would not be so easy, and that is why there is a SCION-to-IP gateway, what you see as SIG in this figure, which does the conversion and encapsulation from SCION to IP. A SCION-to-IP gateway can be deployed inside a SCION provider's network, so that is a carrier-grade SIG; those are the last two use cases that you see here.

So what about production deployments? Our biggest production deployment today is the Secure Swiss Finance Network, which handles financial transactions between banks in Switzerland. Overall it connects around 300 banks, and it runs processes and applications such as, for example, interbank clearing, so when you pay money from one bank to another; that handles around 200 billion francs per day in payments. The network is being migrated to a SCION-based network, and that should be done by next September. The reasons why SCION was interesting for the finance industry are, first of all, this multi-lateral and also locally scoped governance, the isolation domain concept (and indeed they have their own finance isolation domain); of course the reliability and the ability to do performance-based routing were also interesting; the fact that data stays within this trusted part of the network was interesting for this kind of vertical; and ultimately this is a multi-ISP solution, with, I think, currently five different service providers that provide SCION connectivity for this ecosystem.

SCION is not only in use in the finance industry, so we see first steps also outside of it. In Switzerland there is a healthcare network, the so-called HIN Trust Circle, that has been set up based on SCION to connect hospitals, clinics and so on. The power industry has started to look at SCION; I think here the path-awareness properties are interesting when it comes to fast fail-over, for example if the power goes out in a certain part of the country. There is also an initiative to build a global SCION education network; here there are a few universities and NRENs involved, in Europe, the US and some of them in Asia, and in the slides you have pointers if you want to know more about that.

So, the ecosystem: there is still a bit of research involved that keeps investigating the research aspects of SCION, but we also have a production community, a few ISPs that are on it, and this is growing. I think community for such a, let's say, different approach is very important, so last year we constituted the SCION Association, and we have some of the deployers and early adopters as our members. We work in several areas. We take care of the open source implementation, which you can find on GitHub, and we are trying to make it better; there is of course a commercial implementation from a vendor with all the bells and whistles that you need to run it in production, but it is important that we also develop the open source. We also work on community, so we had a community event in Zurich, and we had quite a few people coming to the last IETF hackathon to work on the open source and the deployment guide. Having a clear specification is very important, so we have been active at the IRTF and IETF over the last year and a half, and I am very happy that today, if you want to know what SCION is, you can just go and look the specification up on the datatracker: there are drafts that cover all the core components of SCION, control plane, data plane and PKI, plus a couple of other introductory drafts. These are not adopted, so the work there is still ongoing and there is a long way to go. On this, first of all, we are really learning a lot by interacting with the community, so if you read the drafts and have some concerns or some feedback, then you are very welcome to let us know. We will need to figure out a place for these drafts; SCION is a bit cross-domain, there is PKI and data plane and so on, so we will have to see what the next steps are in terms of draft adoption. We will also continue developing the open source implementation, and I think in the long term it is also important that we see more deployments outside of Switzerland. It is also important that, as we go through perhaps an IETF process, we evolve the protocol and the specification; we may need to make it more interoperable, we may need to simplify it or change it, so this will definitely be a long and interesting discussion.

I think with this I am wrapping up, so here on the slides you see a few pointers: developer documentation, the research website and so on. And I think with this we can open up to questions. Thank you.

(Applause)

IGNAS BAGDONAS: Thank you, please, and we also have some questions in virtual queue.

TOM HILL: Hi there, thank you very much. Tom Hill from BT. Broadly speaking, I have had my eye on SCION at the IETF and its work for a little while, but broadly speaking I am still confused as to why we need to add this level of trust into routing protocols. I do believe we have solved trust at a higher protocol layer, you know, already today, and I don't particularly see why we need to backport that down the stack. One thing that I was curious about, and I am not sure if we fully covered this: if you are in a situation where you would like to run SCION, to my understanding you have to have all your border routers in an AS running SCION, capable of performing SCION activities?

NICOLA RUSTIGNOLI: Concerning your second question, about the routers: SCION can run in parallel with your existing network. In principle you can have one border router and a connection to some SCION neighbour; to run SCION you can run these to collect traffic from your customers, whether they use the gateway or they use the native daemon. So you don't need to enable it on all of them.

TOM HILL: Yeah, I am going to have to try and understand how that doesn't compromise the trust. The next question: is there a way to provide this over the top? Is there a way to provide SCION over an existing network that is not SCION compliant?

NICOLA RUSTIGNOLI: That is a great question, so interoperability can be improved. The way this is done today, for example for remote workers in some of these networks, is by running this SCION gateway inside the ISP, so traffic is collected, let's say, as close as possible to the source, but that is limited to a single autonomous system. There are some research proposals about having something like a SCION PoP, but this is experimental.

TOM HILL: Okay, thank you for your presentation.

IGNAS BAGDONAS: We have Peter Hessler joining us remotely.

PETER HESSLER: Hello, Peter Hessler, from the OpenBSD project. My question was around trust or, more importantly, not trust. How would other examples ‑‑ in SCION are in similar... (lost connection) talking to other finance groups ‑‑

NICOLA RUSTIGNOLI: Would you mind repeating the question, I think you broke off.

PETER HESSLER: Okay. Is this more clear now?

NICOLA RUSTIGNOLI: Yes.

PETER HESSLER: Okay. So my question is about not trust. In all of the examples of deployments that you showed, the groups necessarily trust each other. If SCION were to be used in a more general-purpose network, for example as ‑‑ I am not suggesting this will happen, but as a replacement for the entire Internet, for example ‑‑ many groups don't trust each other. How would we solve this? One great example is that a new political party may not trust the current political party in power, or there may be a group working on reforming police corruption which necessarily does not trust the police force. How would these isolation domains and trust relationships work in that situation?

NICOLA RUSTIGNOLI: That's a great question. So, to clarify this, let's say that you as an AS, a police AS or something, want to join a certain isolation domain; in order to participate in routing in that isolation domain you need to obtain the certificates from a set of ASes that are in charge of governance, and that is something to do with this voting process. So, going back to your question, I think if there are parties that do not mutually trust each other, they would perhaps end up in separate isolation domains, and whether communication is possible between the two depends on whether a trust solution, accepting each other's root certificates, is possible or not. So SCION in that sense tries to make this trust, or this distrust, a bit more explicit, but within each isolation domain you need to have this level of trust. If somebody disagrees, in principle the certificates could be revoked or they might not be able to join this isolation domain, and I think where we have had success is in these somewhat more contained networks, as for finance.

IGNAS BAGDONAS: Thank you. Any other comments, questions, discussions? Last call. Well if not, then thank you.

NICOLA RUSTIGNOLI: Thanks a lot.

(Applause)

IGNAS BAGDONAS: We move to the last part of our meeting, the open microphone discussion. The Routing Working Group: what do you want it to do, and, in general, any comments and feedback. By the way, have you looked up what the last document published by the Routing Working Group was, and when that was? The microphone queues are open, so please, any comments and discussions; this is to start a longer-term discussion which will overflow into the mailing list. Please.

TOBIAS FIEBIG: Tobias Fiebig, MPI. Just as a little bit of an advertisement, I am currently working on an IETF draft updating BCP 194, the BGP security considerations, trying to make a more comprehensive document, which recommends not announcing IXP prefixes to the GRT. I would be very happy if people could take a look at the draft, which can be found by googling for the BCP 194 security update at the IETF, and provide feedback, and note in GitHub the things they like and don't like and the things they want to see added or removed.

IGNAS BAGDONAS: Thank you. It seems we have Peter Hessler again in the queue. Peter.

PETER HESSLER: I think, as far as the Working Group charter is concerned, I am quite happy for the Routing Working Group not necessarily to publish documents. I think one of the big strengths of the Working Group is being able to bring information to operators, especially about IETF protocols and discussions, new techniques, new ideas, new considerations. So, I am relatively happy with what the Working Group is doing currently.

IGNAS BAGDONAS: Thank you for the feedback. Any other comments? Right everyone is happy with the Routing Working Group. Well, we are certainly glad to hear that.

Then, Paul, you had this Chair topic.

PAUL HOOGSTEDER: If we do want to discuss that further ‑‑

IGNAS BAGDONAS: Start the discussion. We still have time for discussion. So, it's kind of a funny situation, I would say: when we have a really full agenda, we get complaints asking why we don't allocate more time for the discussions. This time we have more time for questions and comments, and I am already kind of suspecting that we will get complaints about why the meeting ended up short.

PAUL HOOGSTEDER: One of the things we would like to discuss with you is how to vote when a Chair is to be selected. Last time we used humming. The good thing about humming is that it's anonymous, or relatively anonymous. The problem is that we don't have a strict count, and it might not be clear whether one candidate is preferred over the other, because if you are sitting closer to the observers of the humming and you hum a bit louder, you might have more influence on the result than someone sitting in the back of the room, or in a place in the room where no observer is near.

Would you prefer some kind of electronic voting in that case? And what about remote participants: how do we count their votes? Should we count their votes? If anyone wants to come up to the mic and share their thoughts, please do.

TOM HILL: Tom Hill, BT. I was in the IPv6 Working Group earlier and they used Meetecho with a poll to elect a Working Group Chair. That worked really well.

PAUL HOOGSTEDER: That could certainly work, yeah. Do you need to be signed into the ‑‑ signed up for the meeting to vote?

TOM HILL: You do in that is case yes and it took me a while to find my Meetecho link because I didn't use it this week.

JEN LINKOVA: You can use the lightweight client from the phone, but I guess people are supposed to sign up for the meeting; I think remote attendance is still free. I think it would be quite important to let people who are not in the room vote, because not everyone can be here, for various reasons, and we shouldn't discriminate based on who is physically present.

PAUL HOOGSTEDER: Thank you. One of the other questions, of course, is: do you want us to have fixed terms, and can Chairs be re-elected? What do you think about continuity? Because, well, you might not want to get rid of an experienced, hard-working Chair just because their term is over. Mirjam.

MIRJAM KUEHNE: RIPE Chair. I wanted to make a comment also about the previous topic, about the humming. Different Working Groups use different things. Just to go back a step: we used to do selections ‑‑ and I would rather say selection, we don't want you to elect ‑‑ on the mailing lists, but then it turned out people find it awkward; people don't want to say negative things about their colleagues on the lists. With Meetecho now having the possibility to include people who aren't actually present here, on remote platforms, you actually started this trend last time with the humming in the room, and we thought: okay, how can we include remote participants? So I spoke to the Meetecho guys at the IETF; they have an electronic system, but they said our facilities here with the polls are better, have more functionality, and you can have multiple candidates. But this is not a requirement, you can do it on the list, but some Working Groups have had good experience with it; the IPv6 Working Group did it this time. So that is definitely a possibility, and Meetecho works for that.

Just to remind people why we started going back to having selections at the meetings: because we thought in the past it was unfair, it didn't include people online, but now with Meetecho that works.

On the topic of term limits, I do hear more and more community members actually asking for that, or almost kind of demanding it; it's good to have term limits across the board. That doesn't mean you as a Chair cannot also stand again: your term ends after two or three years, or whatever, depending on how many Working Group Chairs there are, and you have an open call on a regular basis so people can plan for it and prepare to volunteer next time, and then you have a list of candidates and people can select from them. I do think it's healthy for a Working Group, and I think our community likes us to have a certain rotation and to have term limits for Working Group Chairs; it's also healthy for you to take a break from time to time and get some new people in. So I just wanted to make a strong suggestion on that one.

PAUL HOOGSTEDER: Thank you.

JEN LINKOVA: I think we need to clarify what we mean by term limit: do we mean a limit on the number of terms, or a limited duration of a given term? I think we need to distinguish between those, because it's one thing to limit a term to two or three years while being able to stand again, and another to say no more than twice, so six years, and then you go away for at least one term. So let's clarify this.

MIRJAM KUEHNE: I don't want to hog the microphone. That's true, some Working Groups have both and have really good experience with that. The DNS Working Group has done it: for many, many years they have had term limits per Chair, and then you can have two terms, so then you are out, and maybe after at least a one-term break you can stand again next time. So some Working Groups have implemented that, and new ideas and new Chairs come in from time to time; it doesn't mean it's totally new every year, maybe once a year someone steps out, and they can reapply once but not more. But yeah, two different things, term length and number of terms you would probably call them. It's for the Working Group to decide.

SHANE KERR: Shane Kerr, IBM. I was one of the people who caused the DNS Working Group to adopt these limits, where we have three-year terms and then a maximum of two consecutive terms. I think it's really good; I think that we need to bring new people in, and it provides a gentle way for people to step down if they are losing enthusiasm. We shouldn't have a hard time finding qualified people who are interested in the topic and are able to do it; if you can't find any, maybe the Working Group isn't as interesting as people think, and I don't think that is going to be a danger in routing. I would actually prefer it if all Working Groups had the same procedure and policy for this, so I encourage every Working Group to consider just using the DNS process, because it's the best.

DANIEL KARRENBERG: I am Daniel Karrenberg, one of the founders of RIPE, currently RIPE NCC staff member, speaking for myself. Actually speaking for Randy Bush because he has already had to leave.

What I observe is that we are also putting out the appearance of not being really accessible to people who just walk in and might be enthusiastic. So what I would encourage the Working Group Chairs and the members of the Working Group to do is to look around them for people who might make good Working Group Chairs, and to appear open to change and to see some more, I hate to say diversity, but especially also in age and hair colour. Thank you.

IGNAS BAGDONAS: We have Peter Hessler once again in the queue, who wins the virtual prize for being the most active participant.

PETER HESSLER: Excellent, thank you. I would like to make a concrete proposal: since the Routing Working Group has three Chairs, each Chair gets a term of three years, alternating, so only one Chair will be up for selection every year. I have no specific preference on whether there is a term limit, but I would suggest two or three terms if we do end up with one, and I think we can propose a raffle amongst the existing Chairs for who gets one year, who gets two years and who gets three years, and then every round we call for nominations and then just have the selection and voting process that way.

DANIEL KARRENBERG: Thank you for making a proposal; let me make a counter-proposal. Make the numbers two and two, just to have a good churn. And that's no thanklessness towards the Working Group Chairs, who really do a good job; it's more to encourage some turnover and some opportunities for new people to refresh things. So, two-year terms, a maximum of two.

PAUL HOOGSTEDER: But if you do two-year terms and you have three Chairs, how is that going to work? Then there would be two at the same time in one year and one the next year? I think that's a bit too quick.

DANIEL KARRENBERG: Figure it out, but I think three years feels long to me. Yeah. So, maybe select two at a time, then. It's not impossible. I said elect, I said the wrong word; Mirjam points out I should have said select, two at a time. So my point is, let's have more renewal. That's what I'm saying.

PAUL HOOGSTEDER: That's clear, thank you.

IGNAS BAGDONAS: One thing: we don't need to find and finalise a solution right now; this is the start of the discussion. This needs to go to the mailing list, and by the next meeting we would want to have something decided, both on the Chairs and on the role and vision of what the Routing Working Group is supposed to do and what you would like the Routing Working Group to do. So, yes, we had a few other proposals and comments, and those will be taken into account, but there are probably more than 100 of you here in the room and we heard only a few of your opinions. So, please, please say what you think about that and speak up.

PAUL HOOGSTEDER: We are running over time. So might better do that on the mailing list.

IGNAS BAGDONAS: I think we still have three minutes of our meeting time left. Please rate the sessions and provide feedback, and we would like to hear about this format, where we have more time for discussion compared to back-to-back presentations. If you think that this is the right way to do it, we might try to continue it and ask for three consecutive Routing Working Group slots for the next meeting, to accommodate all the content. If you think that we need to squeeze back into the previous format, we might ask for a one-hour slot.

MIRJAM KUEHNE: Mirjam Kuehne, RIPE Chair. I wanted to make you aware that there is also the possibility of interim sessions; some other Working Groups have done that as well. If you think you have too much content, or if you have one specific topic you would like to get more feedback from the Working Group on, the RIPE NCC is happy to provide a Zoom link or whatever and help you set it up, so that really is quite easy to do. Other Working Groups have felt that the time between the meetings is sometimes a bit long if you want to get work done.

PAUL HOOGSTEDER: Noted. Thanks.

IGNAS BAGDONAS: Last call for any other remaining comments? If not, and I don't see anyone coming to the mic, nobody in the virtual queue, we declare victory and you get one minute and 20 seconds of your time back. Thank you, everyone.

(Applause)

PAUL HOOGSTEDER: See you all in Krakow, thanks.

LIVE CAPTIONING BY AOIFE DOWNES, RPR
DUBLIN, IRELAND