As I mentioned in my column two weeks ago, it’s a Golden Age for TV and films. But it’s not so golden if a hearing or visual disability bars you from enjoying that content.
So here’s some good news: Toronto is poised to be a Canadian, North American, and perhaps even global centre of excellence in narrowing the disability digital divide.
Ryerson University’s School of Continuing Education is now offering a series of weekend courses, beginning in January, for those who want to learn or enhance their skills in inclusive media such as live closed captioning and audio description/described video. It is the first dedicated program of its kind offered in Canada, joining degree and postgraduate programs in Audiovisual Translation already established in Europe.
Following the Canadian Radio-television and Telecommunications Commission’s recent Let’s Talk TV initiative, the commission is asking broadcasters to massively increase the amount of described video they provide: as of September this year, all prime-time scripted content on large broadcasters must be accessible.
Most of us are familiar with closed captioning. Described video is even more remarkable. The CRTC website defines it as “a narrated description of a program’s main visual elements, such as settings, costumes, and body language. The description is added during pauses in dialogue, and enables people to form a mental picture of what is happening in the program.”
As Joel Snyder, author of one of the texts used in the Ryerson course, says: “Audio Description is a literary art form in itself. It’s a type of poetry – a haiku.”
In a world of increasingly powerful artificial intelligence, we need to think about which jobs will be most affected by AI.
Great progress has been made recently in transcribing human speech to text in real time, which could be bad news for human closed captioners. A recent transcription app has been described in a review as “amazing.”
But that needs to be put in context: like the proverbial dancing bear, the amazing bit is not that these programs transcribe perfectly; it’s that they do it at all. From the review: “It’s far from perfect — the app doesn’t get every word yet, though with clear speech and little background noise we’d say it’s in the high 90s in terms of percentage.”
That’s great for me dictating a memo to myself at home, but closed captioning for TV and film will always involve multiple speakers talking over one another, some with accents, mumbling, shouting and whispering, often against heavy background noise. For many years to come, humans will be better closed captioners than computers. And for audio description, not only does AI suck at poetry, it never really “understands” what it is looking at: it could describe every element in a given scene, but it has no ability to pick and choose what to describe and what to leave out.
Just how many jobs are we talking here? Not a lot — hundreds, not thousands. But this won’t be just for Canadian TV and movies: as of 2014, Toronto was the third-largest screen-based production centre in North America, and the streaming wars are just getting started with more than $42 billion in production slated for 2020.
Just as one streaming example, Netflix has more than 5,000 titles in its U.S. library, and at last count only 1,063 have audio description in English, and only 430 have audio description in any of 33 languages from Arabic to Ukrainian. Add in streaming offers from Amazon, Apple, Hulu, HBO Max, NBC and CBS, and the need for trained describers and captioners is clear.
Why can’t Toronto become a global centre of excellence for captioning and describing? A low dollar, screen expertise galore, a multilingual population, and a workforce now properly trained by a respected academic powerhouse in media.
Original Source: Toronto Star