So from the world of video and audio we're now going into the world of cloud. This session is called "Cloud-Based Broadcasting and Metadata," and our first team will present a reference architecture for implementing ATSC 3.0 workflows in the cloud, facilitating the studio-to-transmitter link over the public internet and offering a practical framework for researchers and practitioners.

We have two presenters for this session. Boris Kauffmann has over 15 years of experience in the audiovisual industry and specializes in broadcasting, focusing on sound and image acquisition. He is a Senior Specialist Solutions Architect at AWS, where he applies his experience in systems architecture, video over IP, and cloud-based solutions. He holds a master's degree in electrical engineering and computing and a Bachelor of Science in telecommunications engineering from Mackenzie University in Brazil. Our second presenter, Richard Lhermitte, is CTO at ENENSYS Technologies, where he is developing the company's technical strategy, investigating new technical markets, supporting standardization and patents, and studying new systems and architectures. So Boris and Richard, the floor is yours.

Thank you. Thank you, Pete. Okay, good morning everyone. We're going to discuss a little bit of the work we've been doing since last year on virtualizing a full ATSC 3.0 stack, and also on investigating how the dual delivery of broadband and broadcast, both delivered from the cloud to the same receiver, could be leveraged as a failover mechanism for the RF delivery.

The motivations: ATSC 3.0 is already published, it is implemented in four countries already, it is rolling out in the US, and recently Brazil adopted core technologies of ATSC 3.0 for our next-generation television standard. There are of course solutions on the market already, all of them software-based, so technically they could run in the public cloud: apart from the modulator, the amplifiers, and the antennas, everything can be virtualized and run from the cloud. We also wanted to understand the technology implications, because this change is already happening on the playout side. So we built and implemented a fully working ATSC system, and in order to evaluate those challenges we also opened up the possibility of leveraging the same resources to deliver broadband and broadcast.

The more specific objectives: study how the STLTP, the Studio-to-Transmitter Link Transport Protocol from ATSC, could be delivered from the public cloud to the ground. For that we had to validate whether ARQ protocols like SRT, RIST, or Zixi could be the solution to protect that very critical stream, which cannot lose any packets. This also allowed us to run tests on how the CDN delivery and the STLTP could be delivered from the cloud in a synchronous and seamless way.

This was the reference architecture we implemented. From the left you can see the live video coming into the playout system, which is already virtualized. The program output of the playout system, a transport stream, is already feeding the DASH encoder in your traditional linear over-the-top delivery. This DASH encoder is already delivering to an origin, and this origin, over the CDN, delivers to your viewers: traditional broadband delivery. So why not bring the broadcast stack up to the cloud too, to optimize resources? The same DASH encoder that is creating the manifest and segments is also being used to feed the ROUTE server.
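A minimal sketch of that dual-publication idea: one encoder output feeding both the broadband origin and the broadcast ROUTE server. The endpoint URLs and the plain HTTP-PUT ingest are hypothetical placeholders, not the actual interfaces of the products used in the talk, which expose their own ingest mechanisms.

```python
# Hypothetical sketch: publish each DASH segment once, to two consumers.
# ORIGIN_URL and ROUTE_INGEST_URL are placeholders; real products use
# their own ingest interfaces (HTTP PUT/WebDAV, watch folders, etc.).
import urllib.request

ORIGIN_URL = "https://origin.example.com/live"        # broadband (CDN origin)
ROUTE_INGEST_URL = "http://route-server.internal/in"  # broadcast (ROUTE server)

def http_put(base_url: str, name: str, data: bytes) -> int:
    """PUT one DASH segment (or manifest) to an ingest endpoint."""
    req = urllib.request.Request(f"{base_url}/{name}", data=data, method="PUT")
    req.add_header("Content-Type", "application/octet-stream")
    with urllib.request.urlopen(req) as resp:
        return resp.status

def publish_segment(name: str, data: bytes) -> None:
    # The same bytes go to both paths, so broadcast and broadband carry
    # identical segments, which is the precondition for seamless failover.
    http_put(ORIGIN_URL, name, data)
    http_put(ROUTE_INGEST_URL, name, data)
```

Because both paths receive byte-identical segments, the broadcast and broadband representations stay interchangeable, which matters for the failover discussion later in the talk.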
The ROUTE server takes those segments, creates the ROUTE sessions, adds the tables that are required for ATSC 3.0, adds the electronic service guide and the datacasting non-real-time assets, and then delivers all of that over multicast to the gateway. The ATSC 3.0 gateway adds the tables, the modulation parameters, and all the bootstrapping required for the modulator, and then we bring the STLTP down, normally over fiber, but in our case over the public internet, to the exciter, the modulator, and the antenna. So the viewer has the option to receive that over the air or over the top.

Hi everybody. Let me briefly explain: ATSC 3.0 decided to change a little bit how the video is delivered. In a traditional broadcast system we rely on the transport stream, the traditional transport packet, delivered one-way to many users. What ATSC 3.0 decided is to use the packetized approach we use on broadband access, meaning that we are not delivering a flow of packets but delivering files. We rely on the DASH mechanisms to produce segments, or chunks, that are delivered over the broadcast. So we are no longer doing, let's say, RTP/UDP transport-stream-over-IP delivery, but file delivery, in ATSC 3.0.

To do file delivery over broadcast, ATSC 3.0 decided to use one specific protocol, which is called ROUTE. If you want to deliver a file over broadcast you use ROUTE, and the same file over broadband uses the traditional HTTP protocol. It means that, as Boris explained, one DASH segment published by the encoding or packaging stage can be delivered over broadcast using ROUTE and over broadband using traditional HTTP. It is important to understand that there are no more transport packets in ATSC 3.0.

On top of this multicast stream generated over ROUTE (ROUTE is multicast delivery, with one ROUTE session per service), you need some classical signaling. The signaling is the way for the receiver to understand which services are available on your network and how they are composed. You have, classically, two main tables in the ATSC 3.0 protocol. One is called the SLT, which is the list of the services you have on your RF channel; it looks like the PAT in a transport stream, giving you the list of services. This SLT is delivered over a defined and reserved IP multicast stream, a fixed multicast address that the receiver has to receive first, like PID 0 in a transport stream. So the SLT is the entry point, and it then links to the different SLSes, one SLS per service, which provide the architecture of each program: how many video streams, how many audio tracks, whether there are any closed captions, and so on. So the SLS provides this, let's say, advanced information about the service.

All of these services must be delivered, as Boris said, from the broadcast station to the transmitter site, and to do that ATSC decided to make a tunnel: all the multicast ROUTE sessions go over one unique multicast stream, which is called STLTP, the Studio-to-Transmitter Link Transport Protocol. That is one multicast stream from studio to transmitter, and it really is a tunnel, because it contains all the different services, all the different ROUTE sessions for the different services, plus additional information for the transmitter side, typically the synchronization part. There is SFN in ATSC 3.0, for single frequency networks, so you need precise timing information, which is provided by the STLTP, together with all the modulation parameters that the transmitter also has to apply. So this is one tunnel, one multicast stream, which is very important because it contains all the content that has to go over the air. It is this STLTP that goes from the cloud to the transmitter site, and it has to be delivered in a very reliable way, to ensure that no packets are missing between, let's say, the encoding and multiplexing, which is in the cloud, and the transmitter side.
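To make the "entry point" analogy concrete, here is a minimal sketch of a receiver joining the LLS multicast group to read the SLT. The 224.0.23.60:4937 destination is the one reserved by ATSC A/331; the 4-byte header layout and gzip'd-XML payload follow my reading of that spec, so treat the parsing details as an approximation to be checked against A/331.

```python
# Sketch of an ATSC 3.0 receiver's first step: join the well-known LLS
# multicast group (224.0.23.60:4937 per A/331) and read the SLT, which
# plays the role that PID 0 / the PAT plays in a transport stream.
# Header layout and gzip payload follow my reading of A/331; verify
# against the spec before relying on this.
import gzip
import socket
import struct

LLS_GROUP, LLS_PORT = "224.0.23.60", 4937

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", LLS_PORT))
mreq = struct.pack("4s4s", socket.inet_aton(LLS_GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet = sock.recv(65535)
    table_id, group_id, group_count_minus1, version = packet[:4]
    if table_id == 1:  # 1 = SLT in A/331's lls_table_id enumeration
        slt_xml = gzip.decompress(packet[4:])  # LLS tables are gzip'd XML
        print(f"SLT version {version}:\n{slt_xml.decode('utf-8')}")
        break
```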
So this STLTP is a very critical stream, and it is a generic UDP stream; it is not MPEG-TS. We investigated which ARQ protocols would allow this transport, and of course SRT and RIST were the obvious choices, because they support taking that multicast UDP, converting it to unicast, delivering it over the public internet, and reconstructing the multicast on the exciter side.

For those who are not familiar with ARQ protocols: basically, the sender keeps a buffer of the delivered packets while the receiver is getting those packets. If, let's say, packets two, five, and six were not delivered, or arrived out of order, the receiver has time, because there is a buffer, to send negative acknowledgments back to the sender: "Look, I didn't get two, five, and six; please send them back." The sender then retransmits six, five, and two, and the receiver is able to put all those packets back in order and deliver them to the exciter. That way we don't have any type of interruption in the RF stages.

Just to recap, this is the order in which we implemented the whole thing on the broadcast delivery. We used the TITAN Live encoder, because it already provides some templates for service configuration, supporting all the ATSC 3.0 audio and video codec standards. We deliver those DASH segments to the ROUTE server, ENENSYS MediaCast in this case, which adds all the required tables and creates the LLS. This becomes ROUTE sessions over multicast, which are then delivered to the ATSC 3.0 gateway, in this case SmartGate from ENENSYS, which adds all the modulation parameters, creates the PLPs, and then delivers the STLTP over multicast to a tool we built: we basically compiled libRIST and libSRT from the public GitHub repositories and created an interface, just to give some idea of the transport health. Then, in our on-premises labs, we reconstructed the STLTP and fed the modulator and analyzer.

That screen shows, on the left, the actual STLTP over IP being decoded by the transport analyzer, and on the right, the actual output of the modulator: the RF signal that was demodulated and presented. So you can see that it actually works. This is a picture of my laboratory, and those are the modulation parameters used in that test, around 32 megabits per second streams. This is Richard's laboratory, with the ENENSYS modulator there, running the same test. We ran a bunch of different tests with different ARQ configurations, and we're going to present some time series, some plots, about it. But the idea is that this whole rack of equipment was converted to only four EC2 instances in the cloud, and, a little more optimized in terms of compute, you could apply the right compute quantity to the actual work that needs to be done, instead of having a big server just to run, sometimes, a simple piece of signaling software.
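A toy simulation of the NAK-based ARQ behavior just described: the sender keeps a retransmission buffer, and the receiver detects gaps and requests only the missing sequence numbers. This is a conceptual sketch, not the SRT or RIST wire protocol.

```python
# Toy model of NAK-based ARQ: the receiver holds packets in a reorder
# buffer, notices gaps, and asks the sender to retransmit only those.
# Conceptual only; real SRT/RIST add timers, pacing, and wire formats.

class Sender:
    def __init__(self):
        self.history = {}            # seq -> payload, kept for retransmission

    def send(self, seq, payload):
        self.history[seq] = payload
        return seq, payload

    def retransmit(self, naks):
        return [(seq, self.history[seq]) for seq in naks]

class Receiver:
    def __init__(self):
        self.buffer = {}             # seq -> payload (reorder buffer)
        self.expected = 0

    def receive(self, seq, payload):
        self.buffer[seq] = payload

    def naks(self, highest_seen):
        # Sequence numbers we should have by now but do not.
        return [s for s in range(self.expected, highest_seen + 1)
                if s not in self.buffer]

    def deliver_in_order(self):
        out = []
        while self.expected in self.buffer:
            out.append(self.buffer.pop(self.expected))
            self.expected += 1
        return out

tx, rx = Sender(), Receiver()
for seq in range(8):
    pkt = tx.send(seq, f"pkt-{seq}")
    if seq not in (2, 5, 6):         # packets 2, 5, and 6 are lost in transit
        rx.receive(*pkt)

print("missing:", rx.naks(7))        # -> [2, 5, 6]
for pkt in tx.retransmit(rx.naks(7)):
    rx.receive(*pkt)
print("delivered:", rx.deliver_in_order())  # all 8 packets, in order
```

As long as the receive buffer (the SRT "latency") spans a few round trips, the retransmissions arrive in time and the exciter sees a gapless stream.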
We brought just two plots here. The first test was, let's say, the most complex one. We instantiated this whole system in the AWS Oregon region, us-west-2, and delivered over the public internet back to the ENENSYS laboratory in Rennes, France: about a 5,000-mile distance, with a 162-millisecond round-trip time. So we configured the buffer to around four times that, around 700 milliseconds. You can see from the plots: the first plot shows the round-trip times as calculated by the receiver, with some jumps, some changes, that happen in an unmanaged public network. The second plot shows the packets that were not received the first time by the SRT receiver, and the third plot shows the retransmitted packets, which means that all the lost packets were retransmitted and received. In the fourth plot you see no dropped packets. So we could reconstruct 100% of the packets on premises, using a fine-tuned combination of ARQ and FEC in the SRT library. And this ran for eight days; we left it running a really long time to see if it would really work in a real-world situation.

The other test was a little easier, both from a compute perspective and from a run-time perspective: the São Paulo region in Brazil to my laboratory in Itatiba, just 8 milliseconds away, 44 miles from the actual data center. So I could reduce the buffer a little. It could have been reduced a bit more, but I chose to work with 200 milliseconds. Same results with SRT: you can see a slightly better RTT profile in the first plot, then again lost packets, retransmitted packets, and no dropped packets using the SRT library. Same thing with RIST, with the same good performance. Same idea: all the missing packets were recovered, no lost packets at all, and we were able to cross-reference that with the modulator logs; no problems were found on that side.
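The buffer sizing above follows a common rule of thumb: give the ARQ loop several round trips to detect and repair a loss. A small sketch, assuming the roughly 4x RTT multiplier the presenters describe; the 200 ms used on the short-haul link is their conservative manual choice, not a computed value.

```python
# Sketch: size the SRT/RIST receive latency from the measured RTT,
# using the ~4x RTT rule of thumb from the talk: a lost packet must be
# NAK'd and retransmitted (possibly more than once) before playout.

def receive_latency_ms(rtt_ms: float, multiplier: float = 4.0) -> float:
    """Suggested receive-buffer latency for a given round-trip time."""
    return multiplier * rtt_ms

print(receive_latency_ms(162))  # Oregon -> Rennes: 648, rounded up to ~700 ms
print(receive_latency_ms(8))    # Sao Paulo -> lab: 32; they chose a safer 200 ms
```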
So, even if we are using this kind of, let's say, reliable protocol, what happens if one STLTP packet is missing? It is important to understand that on the modulator side, missing one packet is like a disaster, because the modulator will break its synchronization with the gateway that is sending the STLTP, especially regarding what I said previously about the SFN part, the precise timing information for delivering the content over the air. So the modulator will stop and restart its synchronization even if only one STLTP packet is missing. Right after the modulation you have the amplifier stage of your transmitter, and depending on the power of your amplifier it will take from one to five seconds to restart, because an amplifier and transmitter will not bring all the amplifier stages back immediately. So you have between three and five seconds, depending on the power of your transmitter, for the transmitter to restart. And then on the receiver side, the TV set at home, or the set-top box, will desynchronize and resynchronize due to this stop and restart of the RF signal, which takes, let's say, one second on the receiver side. So one missing packet on the STLTP delivery means around five seconds of blackout on your RF channel. All the services inside this RF channel will be blacked out during those five seconds, which matters a lot in terms of, let's say, quality of delivery.

So one thing we were thinking about: given that we have this broadband capability of delivering the same content, thanks to ATSC 3.0, would it be possible to use this broadband link to get the same signal and the same content for the services? That could be possible. The first point we need to ensure is that this content is available on broadcast and broadband simultaneously. What Boris explained is that the same content should be delivered over these two different networks, exactly the same, the same DASH segments, et cetera. We also need to ensure, at the ATSC 3.0 signaling level, that the receiver knows, if there is no more access to the content over broadcast, how to get the content over broadband. There is already some signaling information for this in ATSC 3.0, but nobody has yet made this test in real time, in real life. There is some information that you need to insert at different levels of the signaling, in the SLT and the SLS. What we propose is to put this signaling information in place to allow the receiver to get the service first on the broadcast, fall back to broadband if reception fails, and of course, if the broadcast is restored, go back to the broadcast, keeping broadcast as the main reception path for the receiver.

We also need to ensure that the receiver has this capability. It is not easy for a receiver to decide when reception has failed on one path, and how to switch seamlessly, with no glitch on the video, to broadband, and vice versa. So it is not only a description of this capability at the signaling level; it also has to be implemented on the receiver side.

If we succeed in having this kind of, let's say, automatic switch between broadcast and broadband, what are the advantages? Of course there is the failover capability if the RF signal is failing. There is another possibility: if the RF is lost only in a temporary situation, in a car for example, you can switch to broadband. But it is also an opportunity for the broadcaster to dynamically decide which content is put on broadcast and which content is put on broadband. Typically, if there is a specific event on a channel and the broadcaster decides to put it on broadcast because a lot of people will watch it, you can imagine the broadcaster dynamically putting this service on broadcast; and conversely, he can decide that a service expected to be less popular for a period of time goes back to broadband only. On the receiver side, the end user will not see the difference; he does not know where the service comes from. Or there is another idea: if you have a 4K event on a specific channel that you want to deliver, and you do not have enough space on your broadcast system because you have other channels, you can temporarily switch those additional channels to broadband, to keep the 4K broadcast service on air during the event, and then fall back to broadcast when it is done. So this also offers the broadcaster a lot of flexibility in allocating the RF channel.

But to come back to our situation: if we have this STLTP failure, losing, let's say, one packet, then as I explained previously it is probably five seconds of blackout. So if all the receivers have to fall back to the broadband access to get this content during those five seconds, because that is what we expect, what could be the consequences? We took, and I will let Boris explain the CDN part, an example which is really the worst case: São Paulo. In São Paulo you have a little less than seven million people watching TV over the air. Imagine that all these seven million people are watching the same content at the same time, and unfortunately you have this five-second break of the RF channel, so all of them will have to switch automatically to the broadband access. That is where this computation comes from.
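The figures Boris walks through next are easy to reproduce. A quick back-of-the-envelope check, using the presenters' stated assumptions (about seven million over-the-air viewers in metropolitan São Paulo, a 5 Mbps service, a 5-second blackout):

```python
# Back-of-envelope: traffic if every OTA viewer falls back to the CDN
# at once. Inputs are the talk's assumptions, not measured values.
viewers = 7_000_000          # ~metro Sao Paulo over-the-air audience
service_bps = 5_000_000      # 5 Mbps service
blackout_s = 5               # RF outage duration

burst_bps = viewers * service_bps          # instantaneous demand
data_bytes = burst_bps * blackout_s / 8    # total data in the window

print(f"burst: {burst_bps / 1e12:.1f} Tbps")                 # ~35.0 Tbps
print(f"data in {blackout_s}s: {data_bytes / 1e12:.1f} TB")  # ~21.9 TB
```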
Yes. If you look at the calculations: a 5-megabit-per-second service being tuned at the same time by almost seven million devices, which is the situation in the São Paulo metropolitan area, gives us approximately 21.9 terabytes of data, and those terabytes need to be served by the CDN in a very short window, a 5-second window. Translated into terabits per second, that is about 35 terabits per second, which for a big CDN is not a problem in itself. But the thing is, we are working on top of the baseline: the CDN is already there, serving, and you would require, for a short period of time, an overhead of 34.5 terabits per second, which in some regions is simply not available; it is not a reality. So, as you can see, in terms of data delivery the RF is much more capable than the CDN.

So what would be the strategy? Normally the CDNs, in our case CloudFront, have a hierarchical caching approach, so those requests are not hitting the origin; they are actually hitting the CDN. But when we say CDN, we are talking about a combination of levels. You have the edge locations, which are closest to the viewer and very spread out; those edge locations will try to get the content from the regional edge caches, which are bigger points of presence with bigger storage capacity; and those regional caches will go up to the origin. There is also a mechanism whereby, as soon as the first request comes in and the first byte is served to the first viewer, the file can already be served while it is still flying to the cache; it does not need to wait for the first DASH segment to be completely downloaded to the edge location, it can be served while it is copying. But even so, a multi-CDN approach would be required in this case, because you need to combine the capacity of multiple CDNs to provide this. And it will not be just the CDNs: the actual networks, the ISP networks, the BGP routes, all the underlying internet providers, need to keep up with that burst. So it is a challenge, and it is something to study and verify, how this could work.

In terms of considerations: we documented this whole deployment and shared the findings with manufacturers for future product enhancements. We validated the use of multicast between those components in the cloud; multicast in the cloud is normally not allowed, for scaling and security reasons, so you have to explicitly say that you are going to use multicast and configure a network that way. We validated that ARQ protocols are a feasible solution to transport the STLTP without errors. And this handoff between broadcast and broadband is somewhat defined in the standard, but more testing and implementations are required to actually validate it. This would open up opportunities including targeted advertising, if you would like to switch to broadband during the commercial break for some specific region, let's say using DNS mechanisms. So those were the closing thoughts. Thank you very much; we can take some questions, and here are our contacts.

Thank you very much. So, missing packets are quite a problem if one goes missing, but you've got the receiver checking whether it missed a packet and requesting it. If that request packet gets lost, then it will never get the resend. Why not have the receiver acknowledge every packet that comes in, and have the transmitter realize a packet was not acknowledged and then proactively resend that one?

In reality, the receiver is receiving over the air a manifest and files, and inside that manifest there are files that are hosted locally on the set-top box; in that manifest there is also a URL pointing to the CDN. So actually it is not the transmitter that resends: the receiver needs to look at the manifest, and when it does not find the file that did not arrive because of the RF problem, it needs to go to that URL, ask the CDN for that segment, put that segment in, let's say, the queue, and then play it out. On the complications of the implementation, if you want to complete? Yes, the complexity is how to declare that the service is available on both broadcast and broadband. In ATSC 3.0 they define how to deliver broadcast, and they define how to signal a broadband service, which they call a virtual channel, but they did not really finalize the way to declare that a service is available on both broadcast and broadband, and which one has priority. And of course, we need a little bit of buffering inside the set-top box, in order to have time to make this HTTP request and add the missing segment into the sequence. So it is more of a receiver implementation challenge, I think.
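A sketch of that receiver-side repair path, under the assumptions just described: segments normally arrive over the air into local storage, and the manifest also carries a CDN base URL to fetch from when one is missing. The directory and URL below are hypothetical placeholders.

```python
# Sketch of a receiver's segment-repair logic: prefer the copy that
# arrived over the air (ROUTE -> local storage); if it never arrived,
# fetch the identical segment from the CDN URL given in the manifest.
# The cache directory and base URL are hypothetical placeholders.
import pathlib
import urllib.request

BROADCAST_CACHE = pathlib.Path("/var/route-cache")   # filled by the ROUTE stack
CDN_BASE_URL = "https://cdn.example.com/service1"    # taken from the manifest

def get_segment(name: str) -> bytes:
    local = BROADCAST_CACHE / name
    if local.exists():                   # normal case: broadcast delivery
        return local.read_bytes()
    # RF gap: fall back to broadband for this one segment. A real
    # receiver needs enough playout buffer to hide this round trip.
    with urllib.request.urlopen(f"{CDN_BASE_URL}/{name}") as resp:
        return resp.read()
```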
Good morning, John Ferder, Quest US. Thank you very much for your presentation. I was wondering, from a practical standpoint, and I was very interested when you were describing your failover scenario: have you simulated a failover? If so, how many times did you do it, and how consistent were your results, especially compared against your computations?

I didn't get the last part, but I heard the first part, which was: did we simulate the failover? Unfortunately, not entirely. We simulated the failover in the sense that we are able, at the transmitter side, to force packet loss if we want, but we were not able to simulate the capability of the receiver side to go to broadband to get the missing packet, or the missing DASH segment. As I just explained, it is potentially described in ATSC 3.0, but as of today, as far as I know, there is no receiver able to make this seamless switch. That is something that, for me, could be interesting to study, and to propose to ATSC to run additional, let's say, plugfests, to understand whether it is a solution that could be interesting for broadcasters, and how the receiver should implement this failover mechanism in case of missing or bad RF transmission. But yes, we did not succeed in finding a receiver that is able to make this broadcast-to-broadband switch.

Over here. Hi, Richard. In implementing what you are describing, is there any necessity to make any changes at all in the STLTP? My understanding from your presentation was that there is not. Then the follow-on question is: is there any implication when a station operates a single frequency network and the transmitters are transmitting at different times in order to do network shaping? Does that have any impact on what the CDN has to do in order to make up for the missing data?

I will answer the first part, concerning the STLTP. For me, no, there is nothing to change in the STLTP; everything that is defined in the STLTP is good enough, there is nothing to change. We did have a question when we started the tests: there is FEC capability in the STLTP, and there is also FEC at the SRT level.
So which is best: to use the forward error correction on the STLTP, or the forward error correction at the transport, SRT, level? Finally we decided to use only the FEC at the SRT level, because SRT is managing the transport itself, so we considered it better to put the FEC at the transport level than at the data level. That is something we could try in a second phase, mixing FEC at both levels. So no, there is nothing to change in the STLTP; what has to be validated, for me, is mostly the signaling of the services' availability on both networks, but nothing to do on the STLTP itself. But I did not understand the second question, sorry; can you repeat?

If the broadcaster is using a single frequency network, and adjusts the emission times of the individual transmitters in order to do network shaping, does that cause any need for change in the repair delivery through the CDN, or any additional complexity in the repair delivery through the CDN?

I don't think so, because the SFN will not be broken by the fact that one transmitter continues to deliver the STLTP and the signal on air while a second one stops and restarts. During that period of time the first one continues to broadcast, and when the second one restarts, the coverage increases again, but the SFN is not broken, because it is the same STLTP that is delivered to both to synchronize them for the SFN. So for that there should be no issue.

Well, it would depend on whether a particular receiver is receiving from multiple transmitters or a single transmitter, and the transmitters could very well be emitting at different times, offset from the bootstrap reference emission time. So the question is whether that has any impact on the delivery of the repair packets, or repair files, through the CDN.

I'm not sure I'm able to answer that; we would probably need to take a deeper look. It might be something worth investigating.

Thank you. Thank you very much, gentlemen; a very great session.