I am developing a PDF library for simple tasks like extracting text, PDF file contents, and attributes. The library is written in pure Julia (save for some dependencies on filter libraries). I am open to anyone interested in reviewing and contributing to it:
Thanks for tackling this, a pure Julia PDF parser will be very useful. I hope this can eventually lead to a writing library as well (but I'll understand if that is not of particular interest to you). I'm also hoping that we can get some higher-level tools on top, things like Tabula for example. I am very interested in this and will play around with it, but unfortunately I'm not sure how much time I'll have to contribute seriously.
Tabula should not be too difficult, although all structured text processing in PDF is some form of heuristic, since document structure is not mandated by the PDF format. Acrobat had a table picker as far back as 2002, so I assume it may not be hard to implement. Extraction of all forms of text is definitely of interest to me, and I will ensure APIs for it are available. However, I may leave the subsequent heuristic development for table picking to someone who can invest focused time and effort in that direction.
I will add an issue in the project for tracking this requirement.
Very true! The biggest issue is that PDF creators generate files that are non-compliant with the spec. Many times you have to give the creator higher precedence over the spec, depending on your customer.
I am now finalizing v1 of the APIs for the PDF library, or rather the core of the PDF reader library. Here are the initial benefits of the library:
It will allow you to read through a PDF file and create objects which can be used for further access to the document.
It will also provide you the details of the content on every page and create a tree-like data structure of PDF page contents, which can be used to know what is in the PDF document.
The library has been tested with about 800+ text-based files (12,000+ pages), so it is fairly robust for text objects. The parser itself is fairly robust but somewhat intolerant of deviations, as standards-compliant files are given higher emphasis.
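To make the reading flow above concrete, here is a minimal sketch using PDFIO's exported API (function names are taken from the library's documented interface; `"sample.pdf"` is a placeholder path):

```julia
using PDFIO

# Open the document and build the objects used for further access.
doc = pdDocOpen("sample.pdf")
info = pdDocGetInfo(doc)            # document attributes (Title, Author, ...)
npages = pdDocGetPageCount(doc)
page = pdDocGetPage(doc, 1)         # page object for content-level access
pdDocClose(doc)
```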
However, the next steps to extend the library require knowing the specific domain where it will be used. For example, in text extraction alone, here are some standard challenges:
PDF text does not have to follow the reading order of character appearance. So text may appear as "aliuJ", with each character positioned such that the visual output is "Julia".
Text and graphics directives can be interspersed. So you may get five different text objects, one for each character.
Since fonts can be subsetted, "Julia" may be printed as (uvwxy) with glyph codes of embedded font-51. One needs to query these judiciously, with a fair amount of logical reasoning, to get the actual text.
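As an illustration of the reading-order problem (this is not PDFIO code; `line_text` and the glyph tuples are hypothetical), one common heuristic sorts glyphs by their position on the line and inserts a space wherever the horizontal gap is large:

```julia
# Hypothetical glyph records: (character, x-position on the line).
# In a real extractor these would come from the content-stream text operators.
function line_text(glyphs; space_gap = 10.0)
    sorted = sort(glyphs, by = g -> g[2])   # restore visual reading order
    buf = IOBuffer()
    prev_x = nothing
    for (c, x) in sorted
        # A large horizontal jump suggests a simulated space.
        prev_x !== nothing && x - prev_x > space_gap && print(buf, ' ')
        print(buf, c)
        prev_x = x
    end
    return String(take!(buf))
end

# "aliuJ" in stream order, positioned so the visual output reads "Julia":
glyphs = [('a', 24.0), ('l', 12.0), ('i', 18.0), ('u', 6.0), ('J', 0.0)]
line_text(glyphs)                       # → "Julia"
line_text(vcat(glyphs, [('P', 50.0)]))  # → "Julia P"
```

The `space_gap` threshold is exactly the kind of subjective tuning point discussed below.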
Every such heuristic is subject to the needs and interpretation of the developer/user and can be challenged from an alternate viewpoint. Hence, it's important to keep the low-level APIs simple and minimal, so that any advanced development can be carried out on top of the minimal API set.
After some thought, I realized I would rather keep the base APIs simple and minimal, thus providing more flexibility for developers to build the more advanced solutions they need.
Of course, there are a few areas currently missing from the basic APIs:
Enhancing the documentation of the library.
Support for encrypted PDFs.
Support for image filters. This has been knowingly avoided, as most people may be using a third-party API to render the final graphics. They could send the encoded image in JPEG, JPX, or LZW (TIFF, PNG, GIF) formats rather than decompressing it and sending a raw image to the rendering API.
Standardize the tree iterator with AbstractTrees APIs.
Develop whatever is needed as adoption of the APIs increases.
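Regarding the AbstractTrees point above, hooking a content tree into that package only requires implementing its `children` interface. A minimal sketch (the `ContentNode` type here is a hypothetical stand-in, not PDFIO's actual node type):

```julia
using AbstractTrees

# Hypothetical stand-in for a PDF page-content node.
struct ContentNode
    name::String
    children::Vector{ContentNode}
end
ContentNode(name) = ContentNode(name, ContentNode[])

AbstractTrees.children(n::ContentNode) = n.children
AbstractTrees.printnode(io::IO, n::ContentNode) = print(io, n.name)

page = ContentNode("Page", [ContentNode("Text"), ContentNode("Image")])

# Generic traversal now comes for free from the standard iterators:
names = [n.name for n in PreOrderDFS(page)]   # ["Page", "Text", "Image"]
```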
If you are all in agreement with my approach, I will register PDFIO in the Julia package registry so that it's available for general usage and testing.
Supports Unicode text extraction from font encodings as well as Unicode CMaps (does not read into the font's internal encoding).
Supports Adobe's encodings for Latin fonts.
Does not do any special handling for tagged PDFs, but tagged PDFs may behave better, as the creation order and reading order of document objects are similar.
A new pdPageExtractText method has been introduced, which does a cleaner text conversion for complex PDFs, including non-tagged PDFs.
Bug fixes
Text conversion has been carried out on 25,000+ files.
The untagged master version also has some heuristics for text extraction when the space character is simulated through text positioning. A few documents of 1,000+ pages have been used for text extraction testing as well.
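For reference, a typical way to drive the pdPageExtractText method mentioned above over a whole document might look like this (the surrounding calls follow PDFIO's exported reader API; `"sample.pdf"` is a placeholder path):

```julia
using PDFIO

doc = pdDocOpen("sample.pdf")
try
    # Write the extracted text of every page to stdout.
    for i in 1:pdDocGetPageCount(doc)
        page = pdDocGetPage(doc, i)
        pdPageExtractText(stdout, page)
    end
finally
    pdDocClose(doc)
end
```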
I noticed that the function names that are part of the API don't follow the Julia naming conventions, but other functions do. For example, you chose a name like pdPageExtractText instead of something like pd_page_extract_text. Was this motivated by wanting to be consistent with PDF API naming conventions used elsewhere, or is this merely a historical accident?
The convention is very similar to what is used by Adobe's PDF Library and many other libraries in the industry in general.
Secondly, Julia does not have a convention for exported methods. Only exported methods in PDFIO follow this convention; internal methods follow the underscore notation.
I think it might require more advanced PDF familiarity than I have. R's pdftools::pdf_text had no problem with the file. I was able to "fix it" by running ILovePDF compress on the file, after which it works just fine. The file was generated on a MacBook Pro using Save As from a cfm file. Do reach out for more details or if you need the exact process to help track down the issue.