Celebrity Interview: Jorge Díaz-Cintas

Laureate of the 2014 Jan Ivarsson Award, Founder-Director of the Centre for Translation Studies (CenTraS) at University College London and author of numerous articles and books on audiovisual translation, Jorge Díaz-Cintas is known to many as one of the most prominent figures in the world of subtitling. Former president of ESIST and member of the TransMedia and Trama groups, he combines teaching, research and freelance work, and tries to keep abreast of the ever-changing industry.

Jorge kindly agreed to give me an interview, in which he shared his thoughts on the industry, academia and technology of subtitling, talked about Netflix and the positive change they brought about, and speculated on the future of this field.

Below you will find our tête-à-tête.

The Industry

Max Deryagin (MD): From what I’ve read, in addition to teaching and research, you also work freelance. Can you tell me a bit about it?

Jorge Díaz-Cintas (JDC): I first came to the UK in 1989. In the early 90s, I was asked to do subtitling by a company called Intelfax. I had done other work for them prior to that, so they said, ‘Why don’t you help us with subtitling as well?’ – and I agreed. It was the beginning of the DVD era: I subtitled extras, VAMs, music DVDs like ABBA, U2, Riverdance, things like that. Occasionally I worked with subtitle templates, but not very often, because at the time it wasn’t the thing to do – you’d subtitle to your language from the video clip using your own piece of software.

These days I rarely do subtitling or translation due to the lack of time and the pressure of the industry, because everything needs to be done so quickly now. Once in a while I do corporate videos, usually with templates. I also do quite a bit of interpreting as a registered Home Office interpreter in the UK, although I’ve never written or done any research on the topic.

MD: For how long have you been freelancing?

JDC: I started soon after finishing my university degree in 1989. Around 1994, when I came back to London from the European Parliament in Luxembourg, I began working more firmly with Intelfax and other companies.

MD: So, more than 20 years of experience. Quite a bit. Now, I keep hearing from subtitlers that the rates offered by companies have been steadily decreasing over the years. Have you noticed this trend?

JDC: Sadly, yes.

MD: What do you think is the reason for that?

JDC: Well, there are several reasons, not just one, and you have to look at the issue from different perspectives to get the full picture. Back in the day, companies were interested in doing business with local specialists living in the country where the company was based. Those were the early years of the DVD era: you’d go to the company offices, check the video files, pick up the work materials and take them home – it was a much more personal approach, and they’d pay you according to the country’s standards. But then globalization and decentralization came, and companies started looking for people working in ‘territories’, as they called them – and that’s when the rates started going down. They realized that they could pay much less to translators living in countries with a lower standard of living, since all the work could be done through the internet.

Then there’s also the technological factor. When I first started, I remember people would subtitle around 20 minutes of video in 8 hours, and that was pretty much the norm. Some were even proud of being able to manage that speed – and I’m talking about SDH, which takes less time, since it’s transcription rather than translation. Now, with technology being so pervasive, people can work much faster using new subtitling tools and automation, and faster turnarounds mean lower rates for certain tasks.

Then there’s also the eruption of subtitle templates in the industry. From the early 2000s, companies started to do away with the technological dimension: their translators now had to do the linguistic transfer only, with no spotting required. And, based on that, companies decided to reduce their rates under the pretext that ‘you don’t need to do the spotting or to know how to use the software; the “only” thing you have to do now is translate the subtitle text, so we’ll pay you less’.

And then, there’s also the sex appeal of AVT: many people find it more attractive than other branches of translation, because they think that a subtitler’s job is to watch films and get paid for it. When the Hermes test by Netflix was announced, the newspapers presented it exactly like that – as a fantastic job for film enthusiasts, an opportunity for people with some knowledge of a foreign language to have fun and make money. Of course, this misconception works against the industry, since it projects the wrong image, deprofessionalizes the figure of the subtitler, and acts as a magnet for amateurs: some people will work for almost nothing as long as they can do subtitling – something they would never even consider in other careers. Certain unscrupulous companies exploit this and make a habit of hiring students and aspiring translators for their subtitling projects, so that they can pay them less without worrying too much about quality.

MD: And they get away with it, because the end client often has no way of checking how good or bad the subtitle files from these companies are. This is why I was excited about Jan Pedersen’s FAR model for assessing the quality of subtitles.

JDC: I am also very excited about Jan’s FAR model and, despite the level of subjectivity that surrounds any translation, it’s certainly a step in the right direction. There’s a confluence of reasons why the situation is the way it is. For many years this degradation has been felt bitterly by many translators. I co-organize several international conferences like Languages & The Media and Media For All, and you could see that it was that sort of frustration that was suffocating the debate. The outlook for the profession didn’t seem to be sustainable: rates kept falling, so the best subtitlers couldn’t afford to continue working and were leaving in droves. At one point it hit rock bottom, and the industry had to recognize and accept that the situation was not manageable – it couldn’t continue like this. Ironically, sometimes it takes companies more resources to train novice translators to do quality subtitling than to pay professional subtitlers a decent rate, and I think the industry has started to change its ways now. Perhaps I am a bit of an optimist, but I feel the situation is getting better. As you know, new players like Netflix have been interested in raising the quality standards, and to get there they’ve established a dialogue with professional associations. With their Hermes test, they’re trying to get a clearer picture of who’s who in the field, to find out the sort of expertise that people have, and to validate their subtitling knowledge to some extent. Hopefully, measures like these will help increase the rates. It’s still too early to tell whether this approach will have the desired impact, but people seem to be very positive.

MD: We’re trying to combat this trend of dwindling rates in AVTE, dealing with the bad apples in the industry and trying to educate people on subtitling, but it’s a tall order. Do you think Netflix have improved the situation through their efforts?

JDC: I think so. The great thing about Netflix is that they’re willing to listen to professional subtitlers, which some of the other big players have avoided doing. I’m sure these companies also care about the quality of their subtitles but, so far, they’ve been absent from the debate. For that, I think Netflix is a breath of fresh air that we should embrace – and try to make it work.

MD: Yes, I think they’ve been a huge game-changer. The industry has gotten healthier owing to their work, which we, subtitlers, appreciate very much. Now, another thing that in my opinion could help the industry is certification for subtitlers. The Hermes test by Netflix was a good shot at that, but it was seriously misadvertised by the media, which presented it as an easy way for everyone to land a dream job, to make good money while watching Netflix shows. As a result, Netflix got too many applications and couldn’t process them all quickly. So, in this respect, do you think it would be possible and reasonable to create a full-fledged subtitling certification, rigorous and high-profile, maybe similar to DipTrans but for screening professional subtitlers?

JDC: This question has been raised before. If I’m not mistaken, the CIoL in the UK at one point were thinking about adding subtitling to their battery of tests, but it never quite materialized. There’s this problem: How official can this certification be? Who has the authority to verify and confirm the test results? How well-perceived will it be in the profession? In principle, anyone can come up with a certification, but unless you can command authority, no one will take it seriously; it won’t have credibility in the industry. So, how do you earn their trust to create a certification that’s accepted by the stakeholders? And also, if someone has done a postgraduate course in AVT, why would they need another certification? The benefits will have to be very obvious for people to embrace such a system.

 

But beyond that, there’s another problem: the wide range of languages that the accreditation system would have to cater for. In this age of globalization, some companies now subtitle to and from nearly every language in the world: from Vietnamese to Thai, to Indonesian, to Russian, to Korean, etc. For some of these languages there are no university courses in subtitling and no education centres where people can be fully trained. I recently did an interview with colleagues from a Chinese university, and it was clear that they are slowly but surely awakening to the reality of audiovisual translation. But they still have a long way to go – and this is China, a country with more than a billion people and hundreds of universities and programmes in translation! For other languages, like Mongolian or Vietnamese or Thai, how are we going to assess the competence of subtitlers? Finding the relevant experts qualified enough to vet other people in all these languages can be extremely challenging – if not impossible – for a single accreditation body.

Now, all that said, I’ve been talking to people in the industry that also think the time has come to try and develop a certification system for subtitlers and other audiovisual translators. How to proceed, that’s the big question.

MD: Well, I hope someone steps up to the plate, and I also hope the Hermes test recovers and achieves its initial goals. Now, let’s switch the topic and talk about subtitling standards. Are there some conventional, universally accepted standards that you don’t quite agree with?

JDC: (laughs) Oh, of course.

MD: Tell me.

JDC: It’s interesting how some people in the industry and academia are really attached to their own beliefs and their own ways. For instance, some are adamant that viewers cannot read more than 12 characters per second (CPS) on screen.

MD: Oh yes, I’ve heard that one before.

JDC: This reading speed was originally recommended decades ago, well before digitization, when computers weren’t around and watching video on screen was not as widespread. I understand that you need to give the viewer enough time not only to read the subtitles but also to watch the images, but the problem is that we haven’t conducted enough experimental research to see whether viewers will be able to comfortably follow subtitles beyond 12 CPS. How do people who come home from work to watch a subtitled programme react when confronted with presentation rates of 12 or 15 or 20 CPS? We simply don’t know. Would 22 or 25 be too fast and unpleasant? Would they notice a difference between, say, 13, 14, 15, 16 or 17? Personally, I don’t fully understand this fixation on a given number of characters per second that’s been perpetuated in the profession. I’d suggest that more empirical research be conducted to find out about the likes and dislikes of the 21st-century audience.
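To put the arithmetic behind those numbers in context, here is a minimal sketch of how a subtitle’s reading speed in characters per second could be computed; exactly what counts as a character (spaces, line breaks, punctuation) varies between style guides, so the choices below are illustrative assumptions rather than any official rule.

```python
def reading_speed_cps(text: str, in_time: float, out_time: float) -> float:
    """Characters per second for one subtitle shown from in_time to out_time (in seconds)."""
    duration = out_time - in_time
    if duration <= 0:
        raise ValueError("out_time must be later than in_time")
    visible_chars = len(text.replace("\n", ""))  # line breaks stripped, spaces kept (an assumption)
    return visible_chars / duration

# A 36-character subtitle displayed for 3 seconds reads at exactly 12 CPS.
print(reading_speed_cps("I never said she stole my money,\nJim.", 10.0, 13.0))
```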

Another area that needs more research is shot changes in subtitling, including the idea that when a subtitle crosses a shot change, the viewer might move their eyes back to the beginning of the subtitle and start reading again. Traditionally, classical films were very strict in their montage, and subtitling around shot changes made more sense, but these days film editing is much more dynamic. I still think you need to be careful when subtitling around shot changes, but perhaps it shouldn’t be as rigid as people believe. With some guidelines, you almost need a PhD to apply them properly: if it’s seven frames off, do this, but if it’s eight, do that, and after the shot do something else... It makes you panic when you see a shot change: ‘Okay, stop. Count: how many frames? One, two, three, four, five… eight! Or was it seven? It’s borderline! What do I do?’ I think further research on how to subtitle around shot changes would be most welcome.
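As a purely hypothetical illustration of the kind of frame-counting rule Jorge describes, here is one way such a guideline might be encoded; the seven-frame threshold and the decision to snap the out-cue to the frame just before the cut are assumptions invented for the example, not an actual company specification.

```python
THRESHOLD = 7   # 'danger zone' in frames around a shot change (assumption)

def snap_out_cue(out_frame: int, shot_changes) -> int:
    """Return an adjusted out-cue when it falls too close to a shot change."""
    for cut in shot_changes:
        if abs(out_frame - cut) <= THRESHOLD:
            # Too close to the cut: end the subtitle on the frame just before it,
            # so it neither straddles the shot change nor flickers right after it.
            return cut - 1
    return out_frame  # nowhere near a cut: leave the timing as spotted

# An out-cue at frame 1204, with a cut at frame 1208, gets moved to frame 1207.
print(snap_out_cue(1204, [1208, 1500]))
```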

One other topic that interests me is that of blasphemous language in subtitling. This systematic shying away from swearwords and blasphemous expressions – I find it disturbing. If the dialogue contains something like that and there is enough time and space available, why wouldn’t you translate it? People think such expressions are going to be too disruptive, that viewers will be upset when they read them on screen because their impact is much stronger when written than when spoken, but I’m not sure to what extent that’s true. In my opinion, subtitles tend to be too prudish and depart too much from what we’re actually hearing. We’ve taken the responsibility of gatekeeping the decency of language on behalf of the viewers – we avoid translating certain expressions, because we think the audience will find them offensive. And again, very little research has been done to find what type or amount of obscene language people find distasteful. Who knows, perhaps they don’t mind it at all and feel cheated when they can hear swearwords but don’t see them in the subtitle text. So, this dogma of omitting ‘dirty words’ is another one that I’d like to dispel – or at least find out more about it from the audience’s perspective.

MD: This self-censorship I have experienced first-hand. In fact, I see it in subtitles all the time. And you took the words out of my mouth in regard to the reading speeds and shot changes – I totally agree with you on both fronts. Media consumption has changed over the years, people are more adept at reading subtitles than ever, and the films themselves have become more fast-paced, so it’s natural that the reading speeds in subtitling have gone up. On the subject of shot changes, as you know, there’s this eyetracking study by Agnieszka Szarkowska et al. on whether shot changes induce re-reading of subtitles, and the results indicate that they don’t. Viewers don’t seem to care that much about shot changes, after all. I still think they’re important, and whether you want to cross one depends on its type and the situation at hand, but people seem to be overzealous in this respect.

JDC: I agree, the way some companies do it is too strict, too dogmatic.

MD: Then there’s also this perennial holy war: one-liners versus two-liners. In the Scandinavian countries two-line subtitles are preferred, while in other countries one-liners are the norm, but I’ve noticed some people will go out of their way to split every single two-liner. It sometimes leads to comical situations where you have a series of minimum-duration subtitles that appear and disappear so quickly that you can focus neither on the image nor on the text.

JDC: Hm, I’ve never noticed this myself. I know in some countries, like Japan and to a lesser extent China, the traditional approach is to go for one-liners only, and even when there’s dialogue, they will split it into two consecutive one-line subtitles. Not sure about other languages. I myself prefer two-liners: if I can have more information in one subtitle, especially if two subtitles follow one another – say, with 10 or 12 frames in-between and no shot change – I will merge them together. If you’re a fast reader, you can quickly read the text and have more time to enjoy the images. Too many one-liners call undue attention to the subtitles, which is something we usually want to avoid.

 

Academia

MD: Now let’s move to questions related to academia. What new developments in subtitling-related research are you most excited about?

JDC: I find some of the new approaches to studying the reception of subtitles quite interesting. They probably won’t help us answer all our questions, but it’s the right step forward. I’m talking about eyetrackers and biometric sensors, which can be used to determine what viewers like and dislike in subtitles and give us valuable information about viewer satisfaction. The industry seems to be interested, too – some companies are willing to find out what their viewers prefer and to adjust their guidelines if they get new, contrasting information on people’s preferences.

That said, one has to keep in mind that it’s quite easy to get distracted by this new technology and lose sight of the ultimate aim. It’s a bit like when you teach subtitling and new students get so hooked on the technology that they forget about the linguistic aspects of translation. But I’m sure that these new research tools will help us revisit and improve our understanding of the best subtitling practice.

The challenge is that such applied research is not always welcome in academic circles in our field, where knowledge is measured differently. Academics are under a lot of pressure to publish articles, and this kind of research is sometimes considered too applied and not academic enough, which is why some scholars prefer to do research in more ‘traditional’ areas like the history of translation, the representation of culture, the manipulation of texts and so on. Some people don’t think it’s the remit of the humanities to conduct applied research, but in my opinion we should collaborate with the various stakeholders in the industry and keep the dialogue going – not only with distributors and LSPs but also with manufacturers and developers, including those working on cloud platforms, new software for respeaking, automatic translation engines, and similar.

MD: Could you tell me more about eyetracking and how it's applied in studies?

JDC: The eyetracker is a novel piece of technology in the field of AVT research; it’s a hardware-and-software suite used for analyzing viewer reception. It tracks the eye movements of the participants, so that we know where they are looking when watching a subtitled film, for instance. The results have to be taken cautiously, though, because the only thing an eyetracker is telling us is where the eyes are going, and that’s it. From there, with an educated guess, we extrapolate the information. So, if the eyes stopped at a particular point in the text, it could be because that part was too difficult to understand or there was a mistake, and the viewer stumbled on that word or expression. But, in extreme cases, it could also be that the person was bored or daydreaming and was looking at the screen absent-mindedly, which would corrupt the data – and that’s why the results should be taken with reservations. Still, this is one exciting way to find out how people process subtitles. To make eyetracking experiments more robust, the results are usually triangulated: several other methods of compiling information – like questionnaires or surveys – are used alongside it to validate the data.
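To give a flavour of what that raw gaze data looks like in practice, here is a toy sketch that measures how long a participant’s eyes dwelt in the subtitle area of the screen; the screen region, the 250 Hz sampling rate and the sample format are assumptions for illustration, not the setup of any particular study.

```python
SAMPLE_INTERVAL_MS = 4                 # 250 Hz eyetracker (assumption)
SUBTITLE_AREA = (0, 880, 1920, 1080)   # x1, y1, x2, y2 of the subtitle zone on a 1080p screen

def dwell_time_ms(gaze_samples):
    """Total time (ms) the gaze spent inside the subtitle area."""
    x1, y1, x2, y2 = SUBTITLE_AREA
    hits = sum(1 for _t, x, y in gaze_samples if x1 <= x <= x2 and y1 <= y <= y2)
    return hits * SAMPLE_INTERVAL_MS

# Three fake samples: one on the image, two on the subtitle -> 8 ms of dwell time.
samples = [(0, 960, 400), (4, 800, 950), (8, 820, 960)]
print(dwell_time_ms(samples))
```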

Now we’re finding that we can improve this approach by borrowing techniques and methods from neuroscience. For instance, iMotions have developed an integrated platform that allows researchers to gather biometric data about the participants. We can combine and synchronize eyetracking with sensors that give us information about brain and heart activity or that analyze facial expressions, which complements and enhances the results of the studies. For instance, we could test if the subtitles are actually funny for the viewers: you can record them watching a subtitled clip and then see whether they find it funny, and if so, whether their laughter coincides with them reading a particular word or expression and with their brain activity in the region responsible for processing humour.

 

Here’s another example of how this technology could be used. There’s this assumption in the UK that British people don’t like dubbing, and there is this sort of mythical project that nobody seems to have ever seen, but it’s supposed to have taken place in the 1980s. Some British participants were exposed to material from a French TV series both dubbed and subtitled in English and, apparently, the results indicated that they didn’t like the dubbed version – and that’s why there is almost no dubbing in this country. Now, the other day I was watching a Spanish film with both subtitles and dubbing enabled in English. The subtitles were clearly made out of the dubbing script, so the viewers using subtitles and the viewers using dubbing were presented with exactly the same content. I think it’d be interesting to test viewers to see if reading, which is supposed to be more demanding, is different from listening, and how exactly; to ask the respondents what they liked and disliked about the translation and see what results we can get. Who knows, maybe some British people do like dubbing after all.

Now, given all this potential, the question is, how interdisciplinary can one be? We know quite a bit about subtitling, but what do we know about the heart response or the functioning of the brain? How do you interpret the data in our particular field? Do we need to work with psychologists or neuroscientists, or some other academics? Finding appropriate synergies with colleagues from those fields is what can prove challenging.

MD: Whoa, this is quite interesting. Okay, on to the next question. I’ve noticed in recent years that more and more research is directed at fansubbers: how they do it, what they get right, what they get wrong, the censorship, etc. Do you think there is something that fansubbers do better than traditional subtitlers, that we, professionals, should adopt?

JDC: Yes, there’s a lot of interest in fansubbing and fandubbing in academia these days, because they make for a rich area of research – both in translation studies and in media studies. If there’s one thing we can learn, it is how to be faster, since fansubbers have extremely low turnaround times due to their streamlined workflow. Some viewers don’t want to wait for the official subtitled release of another episode of their favourite show, so they resort to fansubs and, to prevent that, the industry needs to offer its subtitled productions faster. And this is something that we’ve seen in recent years, with producers and distributors releasing all the episodes of a show at once for binge watching.

There’s also the fansubbing practice of using what I’ve called elsewhere ‘topnotes’ – similar to footnotes but appearing on top of the screen – which add extra information to the translation. This practice is frowned upon in professional subtitling, but perhaps it could prove useful in certain audiovisual genres such as educational videos.
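For readers curious what a ‘topnote’ can look like under the hood: many players honour the ASS-style alignment tag {\an8}, which pushes a cue to the top of the screen even inside a plain SRT file. Whether a given client, workflow or player accepts this is something to verify case by case; the snippet below is only a sketch of the idea.

```python
def srt_topnote(index: int, start: str, end: str, note: str) -> str:
    """Build one SRT cue that a compatible player will render at the top of the screen."""
    return f"{index}\n{start} --> {end}\n{{\\an8}}{note}\n"

print(srt_topnote(12, "00:03:10,500", "00:03:14,000",
                  "senpai: a senior student or colleague"))
```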

I know this will be controversial, but maybe, like fansubbers, we could also try to be more flexible when it comes to certain limitations and constraints and be more creative in the use of subtitles.

MD: Good points! Next question – and this is something I think many people would like to know: Do you have plans for a new edition of your book with Aline Remael, Audiovisual Translation: Subtitling?

JDC: (laughs) I should be working on it right now! Yes, it is in the pipeline. Things have changed enormously in our profession, and the idea is to come up with a new version of the book that will take stock of all these changes. The only thing holding it back is the sheer lack of time to work on it! But watch this space. It should be out soon.

MD: Could you maybe share what is going to be new in it, if possible?

JDC: Sure. It’s going to be a polished and revamped version – or maybe even a new book. We’re going to do away with some of the stylistic criteria, because they don’t apply to all the languages – like the use of italics, inverted commas, abbreviations and so on. When we were preparing the previous edition, we knew that tutors and some professionals needed this information, but we realized shortly after – and we are even more acutely aware of it now – that these criteria are quite Eurocentric, if not Anglo-Saxon, and they don’t work in many other countries like China, Japan, Russia, or the Arab world. We’re going to be less dogmatic in this area.

We’re also going to incorporate a bit more on the technology. We’ll continue to use subtitling software, but this time it’s going to be WinCAPS Qu4ntum, Screen Systems’ latest version, and we’re getting rid of the DVD. Back in the day, when we first launched the book, it was revolutionary: a book with a DVD and video clips on it! Now it is very outdated. It shows you how technology has evolved in such a short span of time. For this new book, the audiovisual materials and exercises will be based in the cloud, on a dedicated website. We’re also going to explore the benefits of cloud-based platforms and, hopefully, will include some exercises, so that readers can get a good idea of how to subtitle in the cloud using an online tool. We’re discussing the project with OOONA, and I hope we’ll be able to strike some sort of collaboration.

Finally, we’re going to add a chapter on fansubbing, since there’s so much interest, and we’ll also add another one on research topics and methodology.

MD: Sounds great! Now, if you can share, what have you been up to lately in terms of research?

JDC: I’ve been working on manipulation and censorship in Spain’s Franco period. I’ve actually finalized an article on a film I love, The Barefoot Contessa, which was heavily manipulated – to the extent that the whole plot was completely different in the dubbed version. I’m also working on the reception of subtitles and, together with some colleagues, we’re developing experiments with eyetracking and biometrics. Plus, I’m collaborating with the industry on defining the quality standards of subtitling and the skills needed by subtitlers-to-be.

 

 

The Technology

MD: Now let’s talk about the subtitling technology. 360-degree video – virtual reality – is rapidly becoming more and more popular: it's on the rise in cinema, streaming platforms, video games and elsewhere. This innovative medium poses a new challenge for subtitling, and some researchers, including the BBC team, have already taken the first steps in finding a solution. What is your take on these developments?

JDC: Some research is being done on this topic, and there’s clearly more interest in the industry than there is in academia. It’s always the case in our field, though – we always lag behind when it comes to new technical developments. Things happen first, and then we look at what’s happened, how it’s happened and what the end result was. In this sense, I fear we’re not being as proactive in VR research as we possibly could be. On the other hand, the industry is a bit cagey when it comes to sharing their early results, because whoever gets there first will be the pioneers – and, of course, you don’t want your achievements to be used against you by competitors.

MD: Neural networks for machine translation, speech recognition and captioning – another big new thing. Well, relatively new. Linguee have released their DeepL service for ‘smart’ machine translation; some other companies are already using advanced neural networks for automated closed captioning with pretty good results. Where do you see this going? Do you think machines will be able to replace human subtitlers in the future, or maybe make us all post-editors?

JDC: I don’t think that will happen any time soon but, for sure, things will change. There’s a lot of scope for translation memory tools and machine translation (MT) to come into the field of AVT. We’ve been adamant till now, thinking that they can’t help much in film translation, that they can only work for technical material – and that was probably true until recently, but things have changed. For instance, some TV shows span hundreds of episodes, and automation can help a great deal with consistency, faster term research, reuse of translations done before, incorporation of glossaries, and so on. Indeed, the SUMAT project, funded by the EU, has already ventured into the field of subtitling via machine translation, with reasonably good results.

That said, there are still big hurdles to overcome. I remember being at a presentation where one presenter showed examples of automated subtitling for educational videos. The speaker was extremely happy with the ‘excellent’ quality of the result, whereas many of us were taken aback by the poor line breaks and the rather nonsensical spotting.

One key issue with statistical MT in subtitling is that you need lots of quality input to train your engine. The question is, where do you get that data from? If you ask companies to provide their own subtitle files, they will hesitate to give you a hand, since the engine may later help their competitors. Or you can use materials widely available on the web – that is, fansubs – but then, since their quality tends to be poor, the results can be rather disappointing.

Another issue relates to the need for condensation in subtitling, as you must not only translate automatically but also find rules for text reduction. And how is a machine going to decide what parts of the text are important in the context and what parts can be truncated? I’m not sure. Some companies are trying to resolve this problem by having a professional subtitler create a subtitle file from scratch – properly timed, condensed and translated – which is only then automatically translated into other closely related languages. The result is then much better, and subtitlers essentially become proofreaders. As far as I know, they’ve used this approach with some of the Scandinavian languages, and you can also see the potential with Latin languages like Spanish, Italian, Portuguese, Catalan, Galician, or French.
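As a deliberately naive sketch of what machine-driven condensation might look like, the snippet below drops a few discourse fillers when a line runs over a length limit; the filler list and the 42-character limit are invented for the example, and a real system would need far more linguistic and contextual awareness than this.

```python
FILLERS = ("well, ", "you know, ", "i mean, ", "actually, ")   # made-up filler list
MAX_CHARS = 42                                                  # illustrative line limit

def condense(line: str) -> str:
    """Drop common fillers, but only if the line is too long to begin with."""
    if len(line) <= MAX_CHARS:
        return line
    for filler in FILLERS:
        line = line.replace(filler, "").replace(filler.capitalize(), "")
    return line

print(condense("Well, you know, I was planning to tell her about it tomorrow."))
# -> "I was planning to tell her about it tomorrow."
```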

So, still a few hurdles to overcome before we can fully implement MT for subtitling in the right working environment.

MD: Now, an odd question: in terms of subtitling technology, have you ever had an idea that you wanted to test out of curiosity but couldn’t because it was too expensive?

JDC: Not really sure about this one. Everything I’ve ever wanted to do from a professional perspective I’ve been able to manage. Maybe it means that I haven’t been too ambitious. To be honest, I’ve been very lucky in this respect – I’ve always had good connections with the industry, so I got the best software for my teaching and research, even when it was prohibitively expensive. These days, for example, we’ve got two eyetrackers to conduct experiments and, thanks to our good relationship with iMotions, we can also use most of their biosensors for our research. So, overall, maybe I’ve been spoiled in this sense.

MD: Well, you’re in a great position. I myself sometimes have these crazy ‘what if’ ideas. The other day I was thinking, ‘How does subtitling work with a touchscreen monitor?’ You should be able to scroll through the subtitle list, set in- and out-times, navigate through the video and press program buttons by swiping and tapping on the monitor. Sounds fun, doesn’t it? Touchscreen monitors are pretty expensive, though, so I can’t test this on a whim.

Or there’s another thing I thought of recently: ‘What if I had to burn subtitles into an 8K video – a huge resolution, with 16 times as many pixels as Full HD – how would I do that?’ The 2020 Tokyo Olympics will be broadcast in 8K, so it’s the future and something I need to be prepared for. And, again, testing this would cost a pretty penny.
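For what it’s worth, here is a minimal sketch of how one might burn subtitles into a video with FFmpeg called from Python; the 8K file name and the SRT name are placeholders, and a real 7680×4320 job would need careful encoder settings (and a lot of patience) beyond what is shown here.

```python
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "input_8k.mp4",              # hypothetical 7680x4320 source
    "-vf", "subtitles=episode01.srt",  # render the SRT onto the picture (needs an FFmpeg build with libass)
    "-c:a", "copy",                    # leave the audio stream untouched
    "output_8k_subtitled.mp4",
], check=True)
```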

JDC: Well, in my case, it’s something I’d ask an engineer to do. Yes, there’s virtual reality, touch screens, vector subtitles that you can make bigger or smaller depending on how well you can see, and all these developments are being tested, but, again, in academia we tend to lag behind when it comes to the technical dimension. We’re onlookers rather than trailblazers. I’m aware of this imbalance and wish it were different – maybe we should be more involved and collaborate with people in computing. I guess we need to work harder at it.

MD: Now let’s talk subtitling software. I’m pretty sure you’ve had the opportunity to use many subtitling tools – which one is your favourite?

JDC: (laughs) When I first started in this field, I subtitled in SWIFT, which was the software used by the company I was working for at the time. It was produced by Softel, which then became Miranda Technologies, but I haven’t used the tool for many years now. At the University of Roehampton, where I started teaching subtitling in the late 1990s, I initially used the free demo by FAB. It allowed you to do up to 20 subtitles, so we would break clips into small segments when we needed to work on longer videos. It was quite messy sometimes. Then I got in touch with SysMedia – now Screen Systems – and we struck an agreement with them that allowed us to employ WinCAPS for academic purposes. Over the years, I’ve got very familiar with this piece of software, and I have to admit that I like it very much. There is still room for improvement, though, especially from a pedagogical perspective, since it can be a bit complicated for students who are just starting off, but I really like it.

In other institutions, the ones that don’t have the luxury of professional software, I resort to freeware tools like Subtitle Workshop and Subtitle Edit. And these days I’m getting to know cloud-based subtitling software and platforms better, which I think will become pretty standard in the near future.

MD: Have you heard of EZTitles?

JDC: Yes, of course. They are based in Sofia, Bulgaria. I know EZTitles is used in some universities, and people are quite happy with it too.

MD: Yes, EZTitles is pretty great. On an unrelated note, have you heard of the upcoming AV1 video codec?

JDC: No, not really.

MD: It’s being developed by the Alliance for Open Media, which counts among its members some of the biggest tech companies in the world, like Microsoft, Google, Amazon and Netflix. It’s going to be an open, royalty-free codec, and if the developers achieve their goal, its compression will be superior to that of previous free codecs, so videos encoded with AV1 will be much smaller. What this means is that people will have better access to online video streaming: those with a good internet connection will be able to watch videos online at a higher resolution, and those with a poor connection, who couldn’t watch online videos before, will be able to now. So, more video consumed overall and hence more subtitling needed – one consequence of this upcoming release for us subtitlers.
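Once AV1 encoders ship in common tools, trying the codec out could be as simple as the hedged sketch below, which assumes an FFmpeg build with the libaom-av1 encoder; the file names and the quality value are placeholders, and real settings will depend on the content and the target bitrate.

```python
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "input.mp4",
    "-c:v", "libaom-av1",   # reference AV1 encoder from the Alliance for Open Media
    "-crf", "30",           # constant-quality mode; placeholder value
    "-b:v", "0",            # let CRF alone drive the bitrate
    "-c:a", "copy",
    "output_av1.mkv",
], check=True)
```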

JDC: Most interesting, and a clear sign that technology doesn’t stand still.

MD: Now, my last question: How do you imagine the future of subtitling?

JDC: If I look into my crystal ball, I can see a very promising future ahead – not only for the professional practice of subtitling, with more volume of work, but also for the researchers and academics working in this field. We have slowly but surely inched away from the margins to the centre of the debate, and this can only be good news for those of us involved in such a dynamic and exciting activity.

MD: And this concludes the interview. Thank you very much for your time!

JDC: You’re most welcome (^_^)
