
In my experience with a few big broadcasters like Paramount (previously Viacom) and Discovery, for broadcast in Europe/UK the signal they generate often has a mix of Teletext and/or DVB subtitles inserted, depending on the channel, since those signals are distributed to a lot of partners (local satellite and cable companies) who decide which parts of the signal to map into their systems.

In that context, “teletext” just works the same as the North American 608 captions and has nothing to do with the older full-screen data stuff. There are no restrictions around authority for “teletext equipment” - for those channels they actually use systems fully based in AWS, with the playout engine running on an EC2 instance, so the single MPEG transport stream (video/graphics + audio + captions + subtitles) is generated entirely in software.

It’s also common to send a DVB subtitle multiplex that has a number of languages (up to 20+) embedded in the same signal.


I was under the impression the "teletext" style ones still used teletext encoding, not EIA-608? It's basically just a data stream containing a single page of teletext rather than a whole magazine? AFAIK Sky Digital uses this approach (at least for SD), with a more modern-looking decoder, and it certainly has colour support at least[1].

[1] Although DVB can contain a full teletext stream that can be reinserted into the SD analogue VBI by the receiver. Sky boxes supported that so on some channels you could just go to ye olde page 888, although I haven't a clue if any channels still do that (I don't have an old SD Sky setup around to look).


That's correct - 608/708 is North America only, and UK/Europe could use OP-42/OP-47 teletext for simple captions (not a whole magazine), and/or DVB (mostly for language translation).


To be fair, I'm talking about the old PAL-based teletext, not DVB teletext. Also, computers are now everywhere, so the 888 page can be inserted with minimal effort, unlike back then, when it required coordination between different teams.


I was talking about the older PAL-based teletext as well - but you're right, I understand it was different in terms of equipment originally with the full-screen data feeds.

Separate from DVB, the old PAL OP42/47 teletext payload gets inserted into the MPEG-TS using SMPTE 2038[1] then when decoded, it would go into regular ancillary data of the uncompressed stream per ITU-R BT.1120-7[2].

The broadcast industry really loves standards[3].

[1] https://ieeexplore.ieee.org/document/7290549

[2] https://www.freetv.com.au/wp-content/uploads/2019/08/OP-42-C...

[3] https://xkcd.com/927


Live closed captions (text only) are very common and standard. Usually they're done by an external company listening in to an audio feed and sending the data back. It used to be done with regular POTS lines and telnet, but now it's obviously more common to use public-internet-based services like EEG iCap[1]

I don’t know too much about it but I had read recently that ASL sign language can be thought of as a different language, rather than a direct equivalent to text subtitles[2].

[1] https://eegent.com/icap [2] https://imanyco.com/closed-captions-and-sign-language-not-a-...


> I had read recently that ASL sign language can be thought of as a different language

Yes, it is a different language. I've heard that ASL is rather similar to French Sign Language and quite different from BSL (British Sign Language). If someone were to translate something from English into ASL, and someone else were to translate the ASL back into English, I'd expect the result to be as different from the original as if they'd gone via some other language, like Italian, for example.


There's a variety of running jokes that Italian is half sign language anyways.

(Apparently derived from the fact that in Italy, there's quite a lot more non-verbal communication with hand gestures than other parts of the world)


The older “608”[1] system in North America was much simpler. The current “708”[2] standard does support specifying colours and fonts, but in my experience nobody in the industry uses those functions at all; they just use 708's ability to embed the older 608 payload data within the newer 708 data structure.
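For a concrete picture of that embedding, the cc_data() layer in a 708 caption channel carries a run of 3-byte constructs whose flags byte says whether the two data bytes are a 608 byte pair or DTVCC (708) packet bytes. A rough Python sketch, with the field layout per CTA-708 but the function names my own:

```python
def split_cc_data(cc_bytes: bytes):
    """Separate CEA-608 byte pairs from DTVCC (708) bytes in a cc_data()
    run of 3-byte constructs (one flags byte + two data bytes).
    Flags byte layout per CTA-708: marker(5) | cc_valid(1) | cc_type(2)."""
    eia608, dtvcc = [], []
    for i in range(0, len(cc_bytes) - 2, 3):
        flags, d1, d2 = cc_bytes[i], cc_bytes[i + 1], cc_bytes[i + 2]
        if not flags & 0x04:        # cc_valid bit clear: padding, skip
            continue
        cc_type = flags & 0x03
        if cc_type in (0, 1):       # 0/1: 608 field 1 / field 2 byte pair
            eia608.append((cc_type, d1, d2))
        else:                       # 2/3: DTVCC packet data / packet start
            dtvcc.append((d1, d2))
    return eia608, dtvcc
```

Decoders that only care about the legacy captions just keep the cc_type 0/1 pairs and ignore the rest, which is why 608-in-708 passthrough works so widely.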

In UK/Europe the older/simpler format would be OP-42/OP-47 Teletext[3], which can be used for captions instead of the full-screen data pages, or DVB Subtitles[4], which are used more for “subtitles” in the sense of language translation, rather than only the “closed caption” use case where the text matches the content language. DVB subtitles can be sent as pre-rendered bitmaps or as data for client-side rendering.

[1] https://en.wikipedia.org/wiki/EIA-608 [2] https://en.wikipedia.org/wiki/CTA-708 [3] https://en.wikipedia.org/wiki/Teletext [4] https://en.wikipedia.org/wiki/Subtitles


Yeah, I know of the "708" captioning system but it is surprisingly underutilized* by broadcasters. I think that they don't see any use for e.g. color?

* in terms of 708-only features, not in the pedantic sense that "ATSC uses the 708 system"


There are very strong lobbying groups that push for accessibility in terms of captions (as well as the “DV” described video audio track) but my impression is that their focus is on the quantity of content that’s covered, and the quality (spelling, time-alignment), and I guess they don’t care as much about text styling.

The requirements are quite high in Canada[1] and have been expanding in the US as well[2].

The company I work for makes products for broadcast customers around asset management, linear playout automation, and the playout servers that insert the captions (from files or live data sources), so working out how that all happens is part of every big project.

[1] https://crtc.gc.ca/eng/info_sht/b321.htm [2] https://www.fcc.gov/consumers/guides/closed-captioning-telev...


708 is generally used for HD and 608 for SD. Until we rid the world of standard definition broadcasting, 608 is here to stay.


The company I work for is in the SDN space but specifically around large scale uncompressed (SMPTE 2110, 1.5Gbps per HD stream) video broadcast IP infrastructure rather than related to docker containerization.

We've been deploying larger and larger systems based around our hardware IP switch fabrics (EXE/IPX), using our SDN controller (https://evertz.com/solutions/magnum) to manage network topology for systems with over 150,000 multicast flows, with television-broadcast-critical timing/latency on stream switching.

We're hiring! https://evertz.com/about/careers/


Sorry if this is a dumb question but what does

> with television broadcast critical timing/latency on stream switching.

mean? Is this minimising the time it takes for the channel to change once the person sitting at home has clicked the remote? Or is this for live broadcasters trying to mux multiple raw video streams into a single watchable stream? Or something else?

If the former, what's special about that case over some normal network box?


The type of system I was referring to is the core within the broadcast facility, rather than extending to the home viewers (which is a separate downstream distribution encoding/mux process). Timing is important when you switch between different video feeds to avoid on-air impacts, so all the signals basically have to be video frame-aligned. In an IP system, PTP timing is used to lock all the devices, since greater precision is needed than NTP can provide.
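A minimal sketch of what frame alignment means in practice, assuming the SMPTE 2059-style idea that every frame boundary is derived from a common PTP epoch (function name and the simple-float approach are mine; real implementations work in integer nanoseconds):

```python
from math import ceil

FRAME_PERIOD = 1001 / 30000  # 29.97 fps frame period in seconds

def next_frame_boundary(ptp_seconds: float) -> float:
    """Earliest frame-aligned instant at or after the given PTP time.
    A switch scheduled here lands cleanly between frames on every
    device locked to the same PTP grandmaster."""
    n = ceil(ptp_seconds / FRAME_PERIOD)
    return n * FRAME_PERIOD
```

Since every device computes the same boundary from the same clock, a route change issued for that instant takes effect simultaneously across the fabric.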

Low latency is also important when the SDN system initiates a route switch, since it generally needs to happen on a tight broadcast playlist schedule. We found that SDN (a central controller telling all the switches what to do), rather than an IGMP-type system (multicast subscribe, with switches doing a lot more work parsing packets), is the best approach to handling very large volumes of routes and having them take effect in a predictable, low-latency way.


Thanks for clarifying, I think I get it. It almost sounds a bit analogous to an RTOS where you dictate your deadlines to ensure they are met rather than letting everything run wild and hope for the best.


Would be nice if there were a more granular option for access to photos that allowed "image only" with no metadata... I assume most people don't realize that when they grant photo library access to a cheap filter app, the app can grab the datetime and GPS location for all photos as well, which is a lot of data if their phone is full of photos.


Challenges are mostly around keeping lock-step timing across a lot of separate devices in the system at scale (using PTP) for frame-accurate switching, and working through getting the SMPTE 2110 standards completed for interop. But there are huge benefits: the ability to scale well beyond what is practical with coax, cost savings on cabling with more density (and weight savings, which is important for mobile production trucks), and big architectural advantages from more dynamic infrastructure and device discovery (AMWA NMOS, etc.) rather than hard-wired coax signal paths.

There are a lot of scenarios with massive scale on the back-end, with lots of streams managed as part of the production process ahead of delivery to consumers... We have an IP video router (https://evertz.com/products/EXE-VSR) that has 2,304 10Gbps ports (moving to 25Gbps per port), each of which can do 6x fully uncompressed SMPTE 2110 video (1080i 29.97fps) flows in each direction, with a 46Tbps non-blocking back-plane so the scale can get a bit crazy.
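Sanity-checking those numbers (figures approximate, assuming ~1.5 Gb/s per uncompressed 1080i 2110-20 flow):

```python
ports = 2304            # 10 Gbps ports on the router
port_gbps = 10
flows_per_port = 6      # uncompressed 1080i flows per direction per port
hd_flow_gbps = 1.5      # approximate bandwidth of one such flow

# Non-blocking backplane = every port at line rate in both directions.
backplane_tbps = ports * port_gbps * 2 / 1000   # ~46 Tbps
flows_per_direction = ports * flows_per_port    # 13,824 flows
```

Six 1.5 Gb/s flows is 9 Gb/s, which fits a 10 Gb/s port with headroom for audio/ancillary essence, and 2,304 ports at line rate both ways works out to the quoted ~46 Tbps.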

The full-scale back-end stuff is pretty invisible from the consumer side. Eventually that internal infrastructure feeds into distribution encoders that produce lower-bitrate streams for cable/sat/web distribution (for real-time events, with separate file delivery for VOD platforms like iTunes & Netflix).


My company is heavily involved in this area... there is a migration underway from baseband (https://en.wikipedia.org/wiki/Serial_digital_interface) to fully IP based infrastructure for routing and signal flow.

With IP, it's usually a software defined network model, along with some IGMP components for controlling video flow.

So the clients (software or control panels) would send a request to the routing orchestration system to request a route to send multicast flows (SMPTE 2110, separate multicast for video, audio streams, ancillary data) from a source to a destination, same as with a baseband router. With IP, the orchestration layer also drives the receiving device to join the multicast explicitly if it isn't IGMP based.
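On the receiving side, that explicit join is just a plain multicast group membership. A stdlib-Python sketch (the port and group here are made up, and real 2110 deployments typically use IGMPv3 source-specific joins, which this simple ip_mreq form doesn't cover):

```python
import socket
import struct

def make_membership_request(group: str, iface: str = "0.0.0.0") -> bytes:
    """Pack an ip_mreq struct for IP_ADD_MEMBERSHIP:
    4 bytes of group address + 4 bytes of local interface address."""
    return struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton(iface))

def join_flow(group: str, port: int = 5004) -> socket.socket:
    """Bind a UDP socket and join a multicast flow, as the orchestration
    layer would drive a receiver to do."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                         socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group))
    return sock
```

In the SDN model the controller can instead program the switch forwarding tables directly, so the join message never has to traverse and be parsed hop by hop.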

This is our SDN orchestration; https://evertz.com/solutions/magnum

(we're hiring!)


Your employer, amongst other dinosaur hardware manufacturers, is also making this a nightmare to implement in software because of the tiny packet burst requirements (40us). My team will spend thousands of man-hours, and our servers will waste huge amounts of energy because power-saving has to be turned off, in order to hit these crazy requirements.

In 40 years time we'll be still having to comply with this nonsense because some manufacturers didn't want to update their FPGA designs. It's like fractional frame rates all over again.

This is really where the broadcast industry lost its collective mind.
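For a sense of scale on why that window is so hard to hit in software (back-of-envelope, approximate figures for a ~1.5 Gb/s 2110-20 HD flow in ~1400-byte packets):

```python
# Rough pacing arithmetic for one uncompressed HD flow.
stream_bps = 1.5e9      # ~1.5 Gb/s video essence
payload_bytes = 1400    # typical RTP payload size for 2110-20

packet_interval_s = payload_bytes * 8 / stream_bps   # ~7.5 microseconds
packets_per_40us = 40e-6 / packet_interval_s         # ~5 packets of slack
```

A ~7.5us inter-packet spacing with only ~5 packets of burst tolerance means any scheduler wakeup, C-state exit, or NIC coalescing longer than a few microseconds blows the budget, hence the disabled power-saving.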


We also do full systems purely in software at large scale, such as this one; https://evertz.com/resources/press/Discovery-Cloud-Oct2017.P... ... fully virtualized in AWS, not just for OTT but also full linear cable/sat broadcast channels.

There are valid use cases for both hardware solutions where appropriate, and software solutions for other use cases. There are many low latency use cases where crazy requirements are still actual requirements.


Conveniently those requirements exactly matched the buffers of legacy FPGA designs. Funny that.


I'd be very interested in any details of your approach that you could share. I added another comment to this thread with the details of what I've been working on for professional spec documents with markdown and rendering to PDF (testing with CSS now).


It's relatively low-tech, but it does work. First, I have some scripts to preprocess the Markdown and extract custom markup that I have built to do things that Pandoc's Markdown can't, but that I can still easily inline in the document. For instance, I replaced the source code markup with a custom one that generates LaTeX formatted the way I want it formatted. I use Pandoc to output LaTeX. I then have a series of shell scripts to postprocess this LaTeX to insert pre-rendered figures, re-introduce the custom markup as rendered LaTeX, or to massage the output so that the document is laid out the way I expect.
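As a toy illustration of that preprocessing step (the `@@code` markup and names here are invented; the actual scripts are shell-based per the comment above, and the `{=latex}` raw block is standard Pandoc Markdown):

```python
import re

# Replace a hypothetical custom fence like
#   @@code{c} ... @@end
# with a LaTeX listings environment, wrapped in a Pandoc raw-LaTeX
# block so Pandoc passes it through untouched.
CODE_RE = re.compile(r"@@code\{(\w+)\}\n(.*?)\n@@end", re.DOTALL)

def preprocess(markdown: str) -> str:
    def to_latex(m: re.Match) -> str:
        lang, body = m.group(1), m.group(2)
        return ("```{=latex}\n"
                f"\\begin{{lstlisting}}[language={lang}]\n"
                f"{body}\n"
                "\\end{lstlisting}\n"
                "```")
    return CODE_RE.sub(to_latex, markdown)
```

The same pattern (regex out the custom markup, emit raw LaTeX, feed the result to Pandoc) covers most "Pandoc can't quite do this" cases without forking Pandoc itself.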

I'm planning to use this to write a book in the near future.


This is great.

I write 100-200 page functional spec documents at a vendor for large-scale file-based broadcast systems (10-300 Linux servers) across a number of customer projects, and have been trying to get away from Word since it's slow at that scale. I don't want to spend any time on formatting, and want to get to a templated approach for the others on my team, with consistent output.

Currently I'm using Markdown with some CSS and just use Marked 2 (Mac) to export, but I don't have it all worked out yet. Markdown + LaTeX + Pandoc is probably more powerful and precise than using CSS. I don't use equations, but I do use tables a lot. I'm using MultiMarkdown ASCII tables for now (with a nice atom.io auto-format plugin to make them easy to author), plus some code blocks with syntax highlighting.

The idea is to have a folder per customer/project spec, with a consistent structure of one .md file for the body and a local sub-folder for images (svg workflow(bpmn)/system diagrams mostly, some jpgs for logos, screenshots). The folder would be in a local git repo so we can commit changes and export diffs to see what changed between versions and have multiple people work on the same doc with tracking.

I'm using "invisible links" for in-line comments at the bottom of each section, added while going through the spec with the customer, since it usually takes 5-30 versions before it's finalized and signed off. Those keep a record of discussion with the customer and don't get rendered in the final output. I'm also using a standard set of status tags (@outstanding, @done, @info) within the comment text.

Ex: `[Note: <initials> YYYY-MM-DD]: # (comment text @status-tag)`.

Going through the in-progress spec with customers and typing notes inline has been much better than word's commenting system and using markdown makes it easy for the customer to read without extraneous formatting code in-line.
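A sketch of pulling those comments back out for review (stdlib Python; the regex follows the convention above, and the function name is mine):

```python
import re

# Matches the "invisible link" comment convention:
#   [Note: <initials> YYYY-MM-DD]: # (comment text @status-tag)
COMMENT_RE = re.compile(
    r"^\[Note: (\w+) (\d{4}-\d{2}-\d{2})\]: # \((.*?)\)$",
    re.MULTILINE,
)

def extract_comments(markdown: str):
    """Return (initials, date, text) for each inline review comment."""
    return COMMENT_RE.findall(markdown)
```

Run over the whole .md file, this gives a quick report of open items per author (filter on `@outstanding` in the text) without anything ever reaching the rendered PDF.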

Currently using a multimarkdown header for variables; customer name, project name, author, author email, spec version, etc., but I might move that to a separate YAML file.
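If that moves to a separate YAML file, it could look something like the sketch below. The field names mix standard Pandoc/LaTeX template variables (title, author, toc, numbersections, though some are more often set via command-line flags) with hypothetical custom ones (customer, version) that a custom title-page template would consume:

```yaml
title: "Functional Specification - <Project Name>"
customer: "<Customer Name>"      # custom variable for the title page
author: "<Author>"
email: "author@example.com"      # custom variable
version: "0.7"                   # custom variable, bumped per revision
toc: true
numbersections: true
```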

Ideally, to make each version of the spec, it would go from Markdown through LaTeX/Pandoc to render a PDF with:

- Title page generated automatically from variables (MultiMarkdown header or separate YAML)
- Automatic table of contents
- Automatic header numbering (h1-h6)
- Automatic header/footer using variables, with automatic page numbering
- Basic control of image size (e.g. 80% width for SVGs, original pixel resolution for PNGs) and positioning
- Global paragraph numbering in the sidebar that the customer can reference while discussions are ongoing, which can be turned off for the final PDF output

I'm going to spend time with the examples from the original link to try to work that all out but any suggestions or tips would be greatly appreciated. I'd be happy to post an example of the final template and write-up of the approach on GitHub.

