Fonts for Indian Scripts
Status: Ready to kick off
Looking for: App developers (Web/iOS/Android)
Stylised text written using customised fonts plays an important role in improving the impact and readability of advertisements, signboards, presentations, reports, etc. Artists use their creativity to design such fonts so that they are visually appealing and compatible with the color, texture, size, etc. of other elements in the background. This is a creative and time-consuming process. While many such fonts are available for English, very few are available for Indian languages. The goal of this project is to bridge this gap by building an AI agent that can generate glyphs in a given language such that they have the same style, decoration, color and texture as a given font in English. More specifically, the input to the model will be English text written using a certain font, and the output will be images corresponding to all glyphs in the target language in the same style as the English font.
Why this is relevant in the Indian context
India has 22 constitutionally recognised languages with a combined speaker base of over 1 billion people. Though India is rich in languages, it is poor in digital resources for these languages. This primarily applies to the availability of corpora for Natural Language Processing, but also extends to the available support for authoring in Indian languages. The latter enables the digital usage of Indian languages and in turn provides the opportunity to build large corpora.
On the one hand there is this lacuna of proper authoring tools for Indian languages, and on the other hand there is a rapid increase in the number of Indian language users on the internet. This growth is fuelled by the availability of affordable Indian-language-enabled phones coupled with widespread mobile connectivity even in the remotest parts of India. Indeed, a 2017 report by Google-KPMG titled “Indian Languages - Defining India’s Internet” predicts that by 2021, 75% of Indian internet users will use Indian languages (see Figure 2).
Already in 2016, 234 million people were engaging with content in local languages while the corresponding number for English was 175 million. This has led to an increase in the demand for, and corresponding supply of, original content written in Indian languages. This content is typically in the form of social media posts, news articles and advertisements in local languages. To improve the impact, readability and appeal of such content, it is important to create a wide variety of fonts in Indian languages, which will give authors more creative options to style their articles. This is especially important in the context of blogs and advertisements.
While several such fonts are available for English and other popular languages, very few fonts are available for Indian languages. Hence, there is a clear need to create more fonts for Indian languages. Given the creative nature of this task, it makes sense to not reinvent the wheel but simply adapt the style, color, texture, etc. from English fonts and use them to create glyphs for Indian languages. The focus thus shifts from creative designing to adapting/copying by training a machine to do so.
Data availability and collection
What kind of data do we need for building such a system? To answer this question, let us understand the task clearly with the help of an example. Given “A” written in a particular style, we want to be able to write the Hindi letter “क” (ka) in the same style. Now suppose we have access to some fonts which support both English (Roman script) and Hindi (Devanagari script), as shown below.
Using such data we could learn correspondences between the Roman and Devanagari scripts. At a high level, both these scripts use horizontal lines, vertical lines, slanting lines and curves to render characters. In other words, even though the scripts are different, there is some commonality at the atomic level. We can design an appropriate neural model to capture these commonalities. Now, given a new font, the model needs to answer the following question, similar to what a human would in this situation: given that I already know how “a-z” and “क, ख, ग, …” are written using Font 1, and now that I know how to write “a-z” using Font 2, can I figure out how to write “क, ख, ग, …” using Font 2? To answer this question, the model will exploit the commonalities it has learnt between the two scripts using Font 1 and the additional knowledge about the Roman script from Font 2.
To train such a system we need access to a reasonable number of fonts which support both English and Devanagari. We could then use pairs of these fonts and train the model using the following recipe:
The model first learns the atomic commonalities between the Devanagari and Roman scripts using Font 1.
The model is then shown English glyphs from Font 2 and trained to produce Devanagari glyphs using an appropriate loss function. We can do this because we already know what the Devanagari glyphs look like in this font, but we are not showing this information to the model. We are instead training it to figure this out.
In summary, to train such a model we need access to some fonts which support both Roman and Devanagari scripts.
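The recipe above can be sketched in code. The following is a minimal toy illustration, assuming glyphs are represented as flattened bitmaps; a simple least-squares linear map stands in for the actual neural model (which would be a generative network), and all data here is random, so only the shapes and the training flow are meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: each glyph is a flattened 16x16 bitmap (256 pixels).
# font1_en / font1_dev are paired glyph sets from a font covering both
# scripts; font2_en is the new font for which only Roman glyphs exist.
D = 256
font1_en  = rng.random((26, D))   # 'a'-'z' rendered in Font 1
font1_dev = rng.random((26, D))   # paired Devanagari glyphs in Font 1 (toy pairing)
font2_en  = rng.random((26, D))   # 'a'-'z' rendered in Font 2

# Step 1: learn a cross-script mapping W from the Font 1 pairs.
# (A stand-in for the neural model; here a least-squares linear map.)
W, *_ = np.linalg.lstsq(font1_en, font1_dev, rcond=None)

# Step 2: apply the learnt mapping to Font 2's Roman glyphs to predict
# Devanagari glyphs in Font 2's style.
font2_dev_pred = font2_en @ W

# Training would minimise a reconstruction loss such as L1 on the pairs:
l1_loss = float(np.abs(font1_en @ W - font1_dev).mean())
```

In the real system the linear map would be replaced by a trained generator, and the reconstruction loss would typically be combined with an adversarial loss, as in the papers discussed below.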
Existing Work - Research and Practice
There is already some work on font style transfer for English where the setup is very similar to this project. In particular, the setup is one of few-shot learning: given a text containing only a few English characters, the task is to generate the remaining English characters in the same font. For example, consider that the PR team of a movie has created a very stylised font for a promotional poster. However, this poster would only contain the few English characters required for the title of the movie (say, “Avengers Endgame”). The idea is to look at the available characters (A, v, e, n, g, r, s, E, d, a, m) and then render all the missing characters in the same style. This setup is very similar to our setup, where we have access to all the English characters written using a font and want to generate the missing glyphs, in our case those of the target Indian script. The research papers relevant for this work are listed below.
“Image-to-Image Translation with Conditional Adversarial Networks”: Here the idea is to learn how to translate the characters A-Z to “क, ख, ग, …”. This is a very basic model for the task and is adapted from a model designed for other image-to-image translation tasks (such as sketch-to-image, day-to-night, and so on).
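For reference, the objective in that paper combines a conditional adversarial loss with an L1 reconstruction term. In our setting, x would be the Roman glyph image, y the target Devanagari glyph, and z a random noise vector:

```latex
\begin{aligned}
\mathcal{L}_{cGAN}(G,D) &= \mathbb{E}_{x,y}\big[\log D(x,y)\big]
  + \mathbb{E}_{x,z}\big[\log\big(1 - D(x, G(x,z))\big)\big] \\
\mathcal{L}_{L1}(G) &= \mathbb{E}_{x,y,z}\big[\lVert y - G(x,z)\rVert_1\big] \\
G^{*} &= \arg\min_{G}\max_{D}\; \mathcal{L}_{cGAN}(G,D) + \lambda\,\mathcal{L}_{L1}(G)
\end{aligned}
```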
“Multi-Content GAN for Few-Shot Font Style Transfer”: This model contains two sub-networks: a GlyphNet for generating the glyph shapes and an OrnaNet for copying the style and ornamentation from the source characters.
“Separating Style and Content for Generalized Style Transfer”: This paper proposes a model for generalised style transfer to generate previously unseen glyphs in a given style. The model uses a separate style reference set and a content reference set: the content reference set contains the same glyph in different styles, while the style reference set consists of different glyphs from the same font family. The goal is to produce a new font style for the glyph in the content reference set.
“TET-GAN”: This is another model (Text Effects Transfer) which can both stylise and destylise a glyph.
Autoencoder-guided GAN, DCFont: These papers focus on generating fonts for Chinese and may be relevant for this project, as they have contributed to a better understanding of the font generation problem.
Open Technical Challenges
Below we list some open challenges that need to be addressed in this project.
Most of the existing work on style transfer focuses on English, where a glyph corresponds to a single character. In contrast, in Devanagari a glyph can correspond to a single character or a combination of multiple characters. For example, चा is a glyph in Hindi which is a combination of च and ा. Similarly, त्वा is a combination of multiple characters. Due to the many such consonant-consonant and consonant-vowel combinations, the number of glyphs in Devanagari is very large (of the order of 600). Transferring style from a few known glyphs (a-z) to such a large number of unknown glyphs is going to be challenging.
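A back-of-the-envelope count supports this estimate: pairing each Devanagari consonant with each dependent vowel sign (matra) from the Unicode block, plus the bare consonant forms, already yields close to 600 shapes, before even counting consonant-consonant conjuncts.

```python
# Consonants क..ह occupy U+0915..U+0939; dependent vowel signs (matras)
# ा..ौ occupy U+093E..U+094C in the Devanagari Unicode block.
consonants = [chr(c) for c in range(0x0915, 0x0939 + 1)]
matras = [chr(m) for m in range(0x093E, 0x094C + 1)]

# Every consonant-vowel combination is a distinct glyph, in addition to
# the bare consonants themselves.
cv_glyphs = [c + m for c in consonants for m in matras]
total = len(consonants) + len(cv_glyphs)
print(len(consonants), len(matras), total)  # 37 15 592
```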
The above situation (of having a large number of composite glyphs) also calls for creative loss functions. For example, consider त्मा and त्वा, where the half-त at the beginning is common. Can we define explicit regularisers which ensure that even though these two glyphs are different, they are consistent in the way they render the half-त? Designing such loss functions and regularisers specific to Indian glyphs is also an open challenge.
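One possible shape for such a regulariser (a hypothetical sketch, not an established loss) is to penalise the pixel-wise difference between the sub-regions of two generated glyph bitmaps that are supposed to contain the shared component. The `shared_component_penalty` function and the region coordinates below are illustrative assumptions:

```python
import numpy as np

def shared_component_penalty(glyph_a, glyph_b, region):
    """L1 distance between a shared sub-region of two glyph bitmaps.

    region is a (rows, cols) pair of slices locating the shared component
    (e.g. the half-त in त्मा and त्वा); in practice it would come from
    font metrics or an alignment model.
    """
    rows, cols = region
    return float(np.abs(glyph_a[rows, cols] - glyph_b[rows, cols]).mean())

# Toy example: two 8x8 glyph bitmaps that agree on the left half only.
a = np.zeros((8, 8)); a[:, :4] = 1.0
b = np.ones((8, 8));  b[:, :4] = 1.0
left_half = (slice(None), slice(0, 4))

print(shared_component_penalty(a, b, left_half))                   # 0.0 (consistent)
print(shared_component_penalty(a, b, (slice(None), slice(4, 8))))  # 1.0 (inconsistent)
```

Added to the generator's training objective, such a term would push the model to render shared components identically across glyphs.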
Another challenge is the lack of proper metrics for evaluating the generated samples. There is no automatic way to tell whether a generated sample belongs to one of the font families used for training the model, so all evaluations need to be done manually. It would be interesting to see if one of the research outcomes could be a metric for evaluating font generation.
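As an illustration of what a crude automatic check could look like (a hypothetical stand-in, not a validated metric), one could ask whether a generated glyph lies closer in pixel space to its intended font family than to any other; `nearest_family` and the toy data below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def nearest_family(generated, family_means):
    """Return the family whose mean glyph bitmap is closest in L1 distance."""
    dists = {name: float(np.abs(generated - mean).mean())
             for name, mean in family_means.items()}
    return min(dists, key=dists.get)

# Toy "font families" represented by mean 16x16 glyph bitmaps.
families = {"serif": rng.random((16, 16)), "script": rng.random((16, 16))}

# A fake "generated" sample that is a noisy copy of the serif mean:
sample = families["serif"] + 0.01 * rng.standard_normal((16, 16))
print(nearest_family(sample, families))  # serif
```

A real metric would operate in a learned feature space rather than raw pixels, but even a simple check like this could partially automate evaluation.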
We envision the following milestones for this project:
Reproduce the results reported in the papers listed above for style transfer for English.
Curate a font dataset for Hindi.
Replicate existing style transfer models, such as those listed above, for Devanagari script and analyse the shortcomings of these models.
Based on the above observations, propose changes and design new models and loss functions for style transfer to Indian scripts.
Repeat steps 2-4 for two more Indian languages.
Since this project has a significant research component, at this point we only have clarity about the initial steps. The results obtained after executing the above steps will define future milestones.
Current Team and Open Positions
AI4Bharat is a collaborative platform. The team brings in a diverse pool of talent: students, college graduates, and working professionals with complementary skills. Ishvinder is a final-year undergraduate student and a Teaching Assistant at One Fourth Labs. Shivam works as a Data Scientist at Quest Global, Pune. Saurabh graduated from IIT (BHU), Varanasi and is currently working for EXL Services. Adeetya works as a full-time Research Intern at IIT Hyderabad. Rama Krishna works as a Research Intern at IIIT Delhi. Our “Fonts for Indian Scripts” team is guided by experts in the field of Artificial Intelligence: Prof. Mitesh Khapra and Prof. Pratyush Kumar from IIT Madras.
We are also looking for college students who wish to learn teamwork and in the process develop new skills. In addition, we are looking for a Web/Android/iOS developer who can put our generative model to use and make it available to people by deploying it as a website, a web app, or an Android/iOS application.
 Philip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros, Image-to-Image Translation with Conditional Adversarial Networks, 2017
Samaneh Azadi, Matthew Fisher, Vladimir Kim, Zhaowen Wang, Eli Shechtman, Trevor Darrell, Multi-Content GAN for Few-Shot Font Style Transfer, 2018
 Pengyuan Lyu, Xiang Bai, Cong Yao, Zhen Zhu, Tengteng Huang, Wenyu Liu, Auto-Encoder Guided GAN for Chinese Calligraphy Synthesis, 2017
 Yue Jiang, Zhouhui Lian, Yingmin Tang, Jianguo Xiao, DCFont: An End-To-End Deep Chinese Font Generation System, 2017
 Yexun Zhang, Ya Zhang, Wenbin Cai, Jie Chang, Separating Style and Content for Generalized Style Transfer, 2018
 Shuai Yang, Jiaying Liu, Wenjing Wang and Zongming Guo, TET-GAN: Text Effects Transfer via Stylization and Destylization, 2018