Celebrity Interview: Jan Pedersen
Former President of ESIST, organizer of the Media for All conference, co-founder of the Journal of Audiovisual Translation, co-editor of the Benjamins Translation Library, Associate Professor and Head of the Institute of Interpreting and Translation Studies at Stockholm University — Jan Pedersen has countless titles to his name. One of the biggest influencers in AVT academia, he’s a veteran Swedish subtitler, teacher and researcher, and a frequent presenter at international conferences.
Jan was able to make room in his busy schedule to give me a most illuminating interview. We talked about subtitling norms, fansubbing, globalization and ways to bring AVT practitioners and academics closer together.
Below you will find our conversation.
Max Deryagin (MD): Most people know you as a prominent academic, but you also used to work as a TV subtitler for many years. Please tell me a bit about that.
Jan Pedersen (JP): Sure. I started in the late 1990s when I was hired by a company called Language Land. I had no previous education in translation, only in languages, and subtitled from Danish, English and sometimes Norwegian into Swedish. Most of my work was TV shows, with an occasional film now and then. The title I spent most time on was Late Show with David Letterman; I did it for nearly 10 years. We had a whole team for it, since it aired five times a week. Apart from that, I did various English shows like The Simpsons, The Lone Gunmen, Emmerdale and 60 Minutes, and some Danish drama series such as Nikolaj og Julie.
My first job was actually The Young and the Restless, the soap that The Bold and the Beautiful was spun off from, which I totally hated. They gave it to me because that’s what they did with rookies — they put them on daytime TV. Nobody watches it, so you don’t get into too much trouble if you do it badly.
MD: Well, it’s not only rookies. Subtitling in general isn’t as glamorous as many people think — most of our work is mind-numbing soap operas, forgettable documentaries, tedious corporate stuff, and so on. It’s not often that you get to work on something exciting like Game of Thrones or Stranger Things.
But it’s great that you have professional subtitling experience as an academic. I imagine having such knowledge helps a lot in research.
Now, what was the technology like back in the day? What hardware and software did you use?
JP: Oh, I had a rather low-budget setup, I must say. It was still the VHS era. I had a twin-screen setup, with a big old fat TV on one side of my desk, and I worked on a demo version of a Danish subtitling tool called TitleVision. And because it was a free demo with limited functions, I couldn’t do my spotting from home — I had to either go to the company office or send my translations off to be spotted by someone else.
MD: When did you stop subtitling?
JP: Hm. I started working at university more or less full time in the mid-noughties, and I kept doing quite a bit of subtitling up until 2009. After that it was only a couple of shows a year, and then it gradually trickled to a stop. In the last few years, I haven’t subtitled anything other than teaching materials for my students and small clips for things that we need here at the institute.
MD: In the decade since 2009 much has changed in subtitling — we’ve seen big leaps in technology and major turns in workflows, but these days people say that the AVT industry is going through a massive disruption, a revolution of sorts. What does that mean exactly, and how will it affect everyone in the field?
JP: In short, we’re now having this incredible influence from the Video on Demand industry, particularly Netflix, HBO, Amazon and some other giants. Not only do they do things their own way in terms of the process, but they also follow very different subtitling norms compared to what our local subtitlers and viewers are used to. This influence has shaken the industry and has proven to be quite disruptive. So much so in fact that we’re now seeing a backlash against this trend, particularly here in Scandinavia, where practitioners have started codifying national subtitling norms to counter the onslaught of the American ones coming from the VoD people. The Norwegians were the first to draw up a document setting out how subtitling should be done in their country, the so-called Norwegian model, and got some major stakeholders to back it. And now, just the other week, the Danes did the same, and we’re also working on something like that for Sweden. So, there is some resistance to this revolution of norms.
How will it affect everyone? It depends on whether the backlash proves successful. We’ll know the verdict in a few years.
MD: Hm. I thought this “revolution” thing was about the novel subtitling technology like neural networks, AI, the cloud, automatic captioning, and so on — that it disrupts our ways, intruding on our methods and transforming our role in the production chain.
JP: Well, that’s part of it no doubt, but what do you think drives all this innovation? It’s globalization — big companies like Netflix going global. They keep expanding, so they’re in constant need of solutions that help ensure cost efficiency, fast turnarounds and consistency at an ever-increasing scale. And where there’s demand, there’s supply — tech people see this opportunity and try their hardest to seize it. I guess it’s a bit of “the chicken and the egg” thing — globalization drives technology, and technology in turn enables further globalization. So, yes, technology is part of it, but it’s only a derivative of the VoD industry’s influence.
MD: Do you see all this new technology as a threat to the profession of audiovisual translator, or is it something to look forward to?
JP: Most subtitlers I speak to, almost all in fact, see it as a threat, particularly those who’ve been in the business for a long time. I personally see it as a development. In some ways it’s good, because it makes you more efficient by automating some tasks, and in some ways it’s bad, because it homogenizes subtitling methods across the world and kills local subtitling traditions. Seeing a tradition die is almost like seeing a language die. Well, not as bad, but you get the idea. So, if you’re not happy with change, you’re going to see this as a threat.
MD: I think we’re mostly afraid of dwindling rates and worsening working conditions, a trend that’s been going hand in hand with technological advances in our field. Some companies manage to keep the bar high despite that, like Nordisk Undertext in Sweden, but most don’t, unfortunately.
That said, nowadays quick delivery has become a must — if you don’t produce subtitles fast enough, impatient viewers will often resort to fansubs and some folks will run into spoilers online. No one wants that, so I think the new technology is crucial in today’s realities.
Fansubbing & Creativity
MD: Speaking of fansubs, I’ve noticed it’s quite a controversial topic both among academics and practitioners. Some call fansubbers “heroes”, some call them “criminals”, but there seems to be no middle ground in the debate. What side are you on, if any — and why?
JP: [laughs] Well, I’m trying to take that elusive middle ground. You say there’s none, but I think I’m at it. Now, “hero” is probably too strong a word, but when fansubs are at their best, they’re certainly beneficial, because they give you an opportunity to enjoy content that hasn’t been — and will probably never be — subtitled professionally. They meet a demand that wouldn’t be met otherwise. That’s how it started back in the day with anime in America, and I think it’s a good thing.
Having said that, they still are criminals in a way. Not all of them all the time, but it has been proven now — and tried in court — that disseminating fansubs is in fact illegal if you don’t have the copyright holder’s permission. And it must be so, because if you’re going to have copyright at all, then the dialogue of the film has to be part of it. I don’t think that creating fansubs is a criminal activity in itself, but making them available to other people certainly is.
And there’s another issue: since fansubs tend to be quite bad — regardless of whether or not they go through some sort of quality control — in the worst cases they can damage the foreign audience’s general view of the subtitling craft and ruin their opinion of the movie, especially if that’s the only available translation.
MD: Come to think of it, that’s exactly what I felt recently when watching a fansubbed anime show that had no official subtitles. The translation was so bad that I thought less of the show than I would have otherwise.
But I think I’m with you in trying to be in the middle. There are different fansubbers with different goals and intentions, both good and bad. In some parts of the world they helped popularize subtitling as a profession — I myself was inspired to become a subtitler through exposure to fansubs a long time ago. Also, they sometimes help fight state censorship by producing faithful translations, like they do in China, for example. It’s a form of political activism, and a commendable one at that. And finally, like you said, fansubbers often work on stuff that wouldn’t ever be subtitled otherwise due to financial, licensing or other reasons — e.g. when it comes to old, rare, obscure films or uncommon language pairs. Sure, the result is far from perfect, but I think it’s better than nothing. On the other hand, some fansubbers are in fact criminals who steal intellectual property and profit from it, so it’s not all sunshine and rainbows.
Now, speaking of criminals, you were quite famously part of the so-called “fansub trial”. Could you please describe for those unaware what it was about, what your role was and how it concluded?
JP: Sure. The fansub trial was a case in Sweden, I think the first one ever, where a website was brought to court for disseminating fansub files without the copyright holders’ consent. The site was called undertext.se — “undertext” means “subtitle” in Swedish — and the lawsuit came from an anti-piracy agency called Rättighetsalliansen supported by the major film distributors in the area: Warner Bros, Disney, and so on.
I want to point out, though, that it wasn’t a trial of fansubbers, it was a trial of the site’s owner who made money from ads and donations. So it was more about file-sharing than fansubbing; it’s just that fansubs happened to be part of it. And one more thing: the site didn’t have any videos — only subs.
Regarding my role, I was called in as an expert witness. They had all these subtitle files from the site, thousands of them, and they needed someone to establish that the subs were in fact translations of the films. So that’s what I did — a linguistic analysis of the files. Now, it was fairly clear that they were translations of the film content, but I took the job more seriously than I had to — I developed a model for assessing the quality of subtitles, which I called FAR, to QC a subset of the files. I had this idea that if the fansubs were terrible, then one could argue that they couldn’t be used for watching the film. In other words, if the quality of these translations was too low, they wouldn’t reflect the content of the dialogue, so there’d be no case. The court dismissed my idea and said that as long as there’s a connection between the subs and the original, then that’s a crime, which I think is fair enough.
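[Editor’s note: the FAR acronym is commonly expanded as Functional equivalence, Acceptability and Readability. A model like this lends itself to a simple penalty-point tally per dimension. The sketch below is purely illustrative — the dimension names follow the acronym, but the severity weights, function names and example errors are assumptions for illustration, not the model’s official values.]

```python
# Illustrative sketch of a FAR-style quality assessment: each error is
# assigned to one of three dimensions and a severity, and penalties are
# summed per dimension. Weights here are assumed, not the published ones.
WEIGHTS = {"minor": 0.25, "standard": 0.5, "serious": 1.0}

def far_score(errors):
    """errors: list of (dimension, severity) tuples for one subtitle file."""
    totals = {"functional": 0.0, "acceptability": 0.0, "readability": 0.0}
    for dimension, severity in errors:
        totals[dimension] += WEIGHTS[severity]
    return totals

# Hypothetical QC pass over one fansub file:
errors = [
    ("functional", "serious"),      # a mistranslated line
    ("acceptability", "standard"),  # a grammar error in the target language
    ("readability", "minor"),       # slightly awkward line segmentation
]
print(far_score(errors))
```

A high penalty total in one dimension would then point to where the translation fails, which is also how Jan describes using the model with students later in the interview.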
During the trial, the prosecution showed clearly that the site was linked to The Pirate Bay, because the names of the files perfectly matched those found on the infamous platform, and that was a nail in the coffin. Also, the court established two things: first, that due to the Berne Convention you need the copyright holder’s permission to make translations or alterations of a copyrighted work, and second, that film dialogues have copyright both in their own right and also as part of the film. So, if you translate a film without permission and make the result available to the public, you have in fact committed a crime.
In the end, the site owner was sentenced and made to pay damages. He then went to the Court of Appeals, which upheld the verdict but lowered the payout sum, and the case never made it to the Supreme Court. Interestingly, the guy was only sentenced for disseminating the files that I had looked at, not all of them, which I thought was quite silly, because the sampling was random, so I don’t see why they couldn’t extrapolate the results. I don’t agree with that. Overall, I think it’s a fairly open-and-shut case if your country has signed the Berne Convention, and almost all countries in the world have — 176 nations if I remember correctly. Basically, everyone who makes films.
MD: Have you used the FAR model after the trial, in teaching or research?
JP: Yes, I use it in teaching all the time to assess my students’ subtitles. I don’t apply the scoring system, though, because in my opinion quantifying their errors is not very pedagogical. But even without the scores, the model works well for identifying which areas the student should improve. You can say something like, “Your translation skills are good, but the target language needs some work, because you’re making many grammar errors”.
MD: As someone who has researched fansubbing extensively, how do you think film fansubbing compares to anime fansubbing in terms of quality or anything else?
JP: Well, I have studied fansubbing thoroughly but not extensively as you say. I’ve only looked at Swedish fansubs for films. Sure, I’ve compared my results to other people’s findings, but I haven’t done original research on any other kind of fansubs.
Having said that, I’ve read a fair bit of other researchers’ work, and my impression is that anime fansubs differ considerably. They’re much more creative, they contain extra information, they do a lot of fun stuff like motion, graphics, visual effects, and so on.
Another observation I’ve made is that anime fansubbing has lost its pole position — it used to be predominant, but nowadays it’s a marginal thing. Other forms of fansubs in other media and genres have grown to become much bigger worldwide.
Also, when I look at studies in other countries such as Italy, Argentina and China, I find that their results coincide with mine: today’s non-anime fansubs have inched their way closer to professional subtitles. They try harder to adhere to professional norms and sometimes even include quality control, which doesn’t seem to be the case with anime.
MD: Speaking of creativity in subtitling, there has already been some research into it in the form of integrated titles and their reception. Do you think this integrated, unconventional approach to subtitle placement and design is the way to go?
JP: I think it’s quite fun, but I don’t see it entering mainstream subtitling any time soon. At least at the moment, integrated titling seems to be too labor-intensive, and it’s a major breach of the biggest subtitling norm — that you don’t call attention to your subs. Of course I see a future for them, particularly in niche genres and at film festivals, and I won’t say they’ll never make it into mainstream TV and film, but I find it hard to believe that it’ll happen soon. Time will tell.
MD: Pablo Romero-Fresco’s Accessible Filmmaking project also touches upon the subject of creative subtitling. For me, the project is quite fascinating, because it seeks to make AVT an integral part of the filmmaking process rather than just an afterthought. What do you make of this initiative?
JP: I think it’s absolutely fascinating — it’s really creative, it’s daring, it’s making subtitles a part of the original, but as with integrated titles, it will probably remain a niche thing. I can see it work for some types of content like big Hollywood movies with a huge budget: if you can catch the director’s attention, it might materialize, because some directors have great interest in what happens to their film abroad — famously, Stanley Kubrick, of course. But for the endless drama series and reality shows, and everything that your TV tabloids are filled with, I don’t think they’re going to bother.
Academics & Practitioners
MD: Now let’s switch the topic and talk about a pertinent issue. There is a disconnect between subtitling practitioners and academics. As it stands, many — if not most — subtitlers either know very little about the results of academic AVT research or find those results not applicable in their everyday work. How can we bridge this gap? And should we?
JP: Of course we should! It’s a pity that the gap exists. I think it’s crucial for the work that we do here in academia to have public support and be openly available and validated.
How do we close this gap? Well, from our academic point of view, one thing we can do is make AVT conferences more inclusive of practitioners, so that they can attend, participate in discussions and share their views.
As you know, I organize the Media for All conference, which will take place in Stockholm in June. It started as a meeting place between academia and business, but my vision for it is to include and give voice to practitioners. So, we’re taking several steps in that direction: we’re adding a workshop on how to form an AVT association in your country, we’re giving a spot to audiovisual translators in some discussion panels, and we’re trying to keep the registration fee low, so that people can afford to come. Some conferences have prohibitively expensive fees for practitioners, because it’s not a very prosperous trade right now. We have lower fees, and we’ve also partnered with AVTE and some national associations to give their members a discount.
In general, we academics need to be more active in spreading our research results, which I’m trying to do myself via social media. If we don’t tell people what we do, we might as well not do it. And we also need to come up with more initiatives. You and I met in Berlin last year to discuss this very “gap” topic in our roundtable with AVTE representatives and academics, and I think it was a fruitful meeting. The problem, of course, is that such things take time, and we’re all busy people, so it’s hard to find ways of making this extra effort work.
MD: Great points! I will definitely attend Media for All — I think it’s a great opportunity to network, meet industry influencers and keep abreast of developments in the field. And I hope more people like me will come, so that other conference organizers can see that if you make your event more inclusive, you’ll get better attendance.
Now, from my perspective as a practitioner, I see three more ways to fix the gap problem. Number one is to provide easier access to research. Quite often papers hide behind crazy paywalls and infinite mazes of web navigation. For instance, accessing your recent article “Fansubbing in Subtitling Land” was so much hassle that I gave up on the idea — not only did I have to subscribe to a service that I didn’t care about, but I couldn’t even figure out how to do that in the first place. No button, no menu, nothing; I wish there were an easier way.
My second wish is for research to be more applied. Right now much of it is too academicky, kind of useless for us practitioners, which makes many of us not very interested in the whole thing. I understand that just because it’s useless to us, doesn’t mean it’s useless altogether, but I wish there was more interest in applied research.
And as for spreading results, we need an easier way to learn about new publications. Currently they’re dispersed across the web, and it’s hard to find all the research you’re interested in. It’d be great if there was a one-stop website where all new articles and books were listed, weekly or monthly, maybe even grouped by subject or language, with a forum to discuss them. That’d definitely be helpful for everyone involved — for us to find relevant research and for academics to increase their readership.
I think in this respect the creation of the Journal of Audiovisual Translation was a great step forward. It’s freely accessible, well-made, and it’s a great way to broadcast academic findings. I thoroughly enjoyed reading the inaugural issue. Tell me please, what motivated its creation and what are its goals?
JP: Okay, let me first address your points. I certainly agree that things should be more accessible — that’s why we created JAT after all — but unfortunately such an approach has very few incentives for us. It’s a bit of a catch-22: you want your papers to have a wider reach, but journals that are freely available online, which more people read, tend to have lower academic ratings, so publishing there won’t help your career as much as publishing in a prestigious but paywalled journal. I myself try to publish a lot online, some articles and also stuff for JAT, but once in a while I need to send things to a high-ranked outlet, which might be paywalled or hard to find, because otherwise I don’t get many points towards promotion. So, accessibility issues are inevitable in the current climate.
This whole system is very problematic, it needs to change — and I think it will. In Sweden there’s a growing movement against it. For instance, my university has cancelled their contract with Elsevier, a huge company that owns a large share of the world’s academic journals and charges an arm and a leg for them. It’s so expensive that only university libraries can afford it, so if you publish in Elsevier’s journals, you do get a good amount of academic credit, but your papers won’t be widely disseminated — only a few other researchers will see them.
Some of these journals have a policy that they give you access to their archives, things that were published a few years ago, but if you want the new stuff, you need to subscribe, which, again, only a library can afford. It’s a silly system, I think. I totally agree with you that it’s not a good thing.
Now, about things being more applied. Yes, that’d be useful for you, but the other stuff is more useful for other people and in other ways. They’re just aimed at different audiences. We need pure science, too, so that we can do applied science — we need both kinds.
The idea of having a website with a list of publications — I think it’s a brilliant idea, but to make that work we would need to pay someone to create and maintain the whole thing. I don’t think it’d be possible otherwise, because the rest of us — we just don’t have the time.
But anyway, your question was about JAT, wasn’t it? [laughs]
MD: Ha ha, yes, but what a great answer!
JP: Thanks. So, the Journal of Audiovisual Translation was a few years in the making. The original initiative came from Elena Di Giovanni, who was frustrated with how long she had to wait before her articles got published in the established journals. After you submit a paper, the standard wait is about two years.
MD: Good lord!
JP: Yeah, and if you’re doing topical stuff, that’s no good. So, she suggested that we start our own journal of audiovisual translation, because at the time there weren’t any. I mean, there was one made by practitioners, the French L’Écran Traduit, but not an academic one — we had to publish in all kinds of related outlets, like journals for general translation studies, studies in psychology, disability, media and so on; anything but AVT. We wanted to show people that AVT is a field of studies in its own right, which requires two things: conferences and a dedicated journal. We do have many conferences — Media for All, Languages & The Media, Intermedia, etc. — but we didn’t have a journal, and that’s another reason why we decided to start JAT.
So, when I was still President of ESIST, Elena approached me with this idea, which I thought was great, and we started working on it. We got the ESIST board’s approval and entered negotiations with a major publisher to create such a journal. However, the publisher couldn’t agree to make it open-access unless we paid a large sum to cover the expenses — either the authors had to pay themselves or we had to somehow raise the money. The sum was enormous, to be honest, and I’m not very keen on authors paying for their own articles to be published, which happens a lot these days in academia.
We discussed this issue at an ESIST general meeting in Berlin, I think in 2016, to see if our association could cover the funding. The members decided that in theory we could back JAT with some funds if it was going to be ESIST-supported, but we didn’t really have enough money, so the publisher thing wasn’t happening. And also, we thought that the journal absolutely must be freely accessible. At the time I was a bit frustrated that our good deal with the publisher didn’t work out, but in hindsight it turned out even better, because we now can make our stuff available online without the back-breaking financial stress.
So, the inaugural issue came out in November. It’s double-blind peer-reviewed, high-quality, as good as a paid journal but for free and in open access; similar to JoSTrans but fully dedicated to media accessibility and AVT.
The problem we’re having now is that we’re doing everything ourselves, which is really time-consuming. We need to get more help on board with proofreading, layouts, technical stuff, etc. If we had gone with the publisher, they’d be doing that for us, but now we have to do it ourselves.
MD: Very interesting! Now let’s change the topic again and talk about standards. I asked Jorge Díaz-Cintas this question during our interview, but I’d like to ask you as well. Are there some conventional, universally accepted subtitling standards that you don’t quite agree with?
JP: [bursts into laughter] Are there any subtitling standards at all that are in fact universal?
MD: I mean what most people believe to be the right way to subtitle, in whatever aspect — reading speed, shot changes, segmentation, etc.
JP: Well… I don’t think there are any universal standards. People do things differently in different countries, different media, different types of subtitling, for different customers and different reasons. Maybe the ones in our Code of Good Subtitling Practice could be called universal, but they are very general, maybe even too general to be really useful. Come to think of it, the most universal standards are the Netflix ones. They have more or less the same set of guidelines for almost all their languages, which kind of makes sense for a global company, though I’m not a fan of that.
We have fairly established norms in the so-called traditional subtitling countries, and I think those are quite good; I don’t disagree with any of them.
MD: You recently published a paper about the Netflix subtitling standards. Do you find their Swedish guidelines optimal? If not, what would you change?
JP: [chuckles] Reading speed is certainly one thing. Our Swedish norm is 12 characters per second, and they have 17. Their line length also differs: they have 42 characters against our 38. This isn’t a problem, though, because modern TV screens can handle 42, so it’s a good development.
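[Editor’s note: to make the contrast concrete, here is a minimal sketch of how these two sets of limits could be checked mechanically. The reading-speed (characters per second, CPS) and line-length figures are the ones Jan quotes; the subtitle representation and function names are assumptions for illustration.]

```python
# Norm parameters as quoted in the interview: traditional Swedish norms
# vs. the Netflix guidelines (figures only; other parameters omitted).
SWEDISH = {"cps": 12, "line_length": 38}
NETFLIX = {"cps": 17, "line_length": 42}

def check_subtitle(lines, duration_seconds, norm):
    """Return a list of norm violations for a single subtitle event."""
    issues = []
    char_count = sum(len(line) for line in lines)
    cps = char_count / duration_seconds
    if cps > norm["cps"]:
        issues.append(f"reading speed {cps:.1f} CPS exceeds {norm['cps']}")
    for line in lines:
        if len(line) > norm["line_length"]:
            issues.append(f"line of {len(line)} chars exceeds {norm['line_length']}")
    return issues

# A made-up two-line Swedish subtitle shown for 4 seconds:
sub = ["Det var en gång en prins", "som bodde i ett stort slott."]
print(check_subtitle(sub, 4.0, SWEDISH))  # too fast for the Swedish 12 CPS norm
print(check_subtitle(sub, 4.0, NETFLIX))  # passes the more permissive limits
```

The same subtitle event can thus be acceptable under one set of guidelines and a violation under another, which is exactly the clash of norms discussed above.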
One of the most annoying things about Netflix subs is their positioning, that they jump up and down unexpectedly. As soon as you have any form of writing at the bottom of the screen, however unimportant — like “Assistant of the producer’s dog walker” — then the subtitles go up, sometimes to the middle of a character’s face. Viewers find that very annoying; I know they do, because they keep telling me so. It’s probably not Netflix’s original idea but rather the way they’ve interpreted certain guidelines. I would definitely change this.
Also, there’s the question of subtitle density, how many subtitles you get per minute. Here Netflix clearly follow the American norms — you get a lot of short subtitles, whereas in Sweden we’re used to having big two-liners that sit there for quite a while. And there are also small things like dual-speaker dashes — the Netflix guidelines require a blank space after them, which is not the Swedish standard.
Some of these things might seem very minor, and to a certain extent they are, but if your subs don’t conform to what people are used to seeing on local TV, then you do away with the ideal of mainstream subtitles being fluent and unnoticeable. Netflix are creating some amazing content, but they’re also making subtitles that draw attention away from it — and that’s an issue.
MD: How do you think the subtitling standards will change and develop in the future?
JP: Higher reading speeds certainly. Partly because Netflix say so and also because the American norms are gaining ground outside Netflix, coming from VoD and DVD as a whole, particularly in countries where subtitling hasn’t been the main mode of AVT: Germany, Spain, France, Italy, and so on.
Beyond that, I think it’s the local initiatives trying to re-establish national norms. We’ll have to see how well that works against the pressure of globalization. These are very exciting times — things are changing and we don’t know how it’s going to end. But it seems we’re losing the battle on reading speeds.
MD: I hope we’ll also see the development of subtitling standards for integrated titles and newer forms of media like video games and VR/360-degree videos. The latter is being actively worked on by a number of teams, including the ImAc project and BBC R&D.
JP: Yeah. Actually there’s this guy, Samuel Strong, who’s investigating subtitles in video games, trying to work out why they’re so different from the ones that you find elsewhere. Do gamers want the same kind of subtitling that we have on TV? Or something different? And why? It’d be quite interesting to see the results of that project.
MD: And now, my last question to you: If you can share, what are you currently working on and what are your future plans?
JP: Well, in addition to preparing the next issue of JAT and organizing the 8th Media for All, I’m working on two separate handbooks of AVT. As you know, the Routledge one came out recently, but there are two new books that I’m writing chapters for at the moment. And then I’m working on a spin-off of the fansub investigation: I was given all the materials from the trial after it wrapped up, and they have loads of interesting stuff about fansubbers’ work. So I’m trying to find time to do a sociological study of Swedish fansubbers.
MD: Awesome! All right, this concludes the interview. Thank you for joining me today!
JP: You’re welcome! It was a nice talk!