Oddly enough, the Nemeth code in ASCII for the “is greater than or equal to” symbol is not >= (as we have come to expect in programming languages). It is instead .1:
… but if you look at the ASCII, you might go, “What?” And it just depends. If you’re an ASCII person, that makes all the sense in the world. I’ll tell you, to me, that drives me crazy.
Perhaps that is near the root of the Julia-and-braille problem. Some Nemeth braille users prefer to edit braille cells while others prefer to edit ASCII, even though both are trying to create the same print-ready math equations. A solution that lets them collaborate easily (i.e., without driving each other crazy) on a lengthy math derivation involving dozens of equations would, I think, be the same solution that lets a person editing Nemeth braille cells, a person editing Nemeth ASCII, a person using a Unicode-capable editor, and a person using a Unicode-incapable editor all collaborate easily on a Julia source code file.
As a first-pass guess at a solution: how about a no-frills lingua franca section of code that Julia can execute, followed by a comment block that defines the substitutions needed to convert that lingua franca code into each user’s preferred editing format?
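A minimal sketch of that idea (the file layout, the substitution-block format, and the helper name are all my own assumptions, not an existing convention): the file keeps a plain-ASCII lingua franca section that Julia executes, and a trailing comment block lists the substitutions each editor applies to render its preferred view.

```julia
# --- lingua franca section (what Julia actually executes) ---
# area(r) = pi * r^2

# --- hypothetical substitution block (one rule per line) ---
#= SUBSTITUTIONS (unicode view)
pi => π
>= => ≥
=#

# Hypothetical helper that rewrites the lingua franca text for one user's view:
function apply_substitutions(src::AbstractString, rules::Dict{String,String})
    out = src
    for (ascii, glyph) in rules
        out = replace(out, ascii => glyph)
    end
    return out
end

rules = Dict("pi" => "π", ">=" => "≥")
println(apply_substitutions("x >= 2pi", rules))  # → x ≥ 2π
```

Each collaborator would run the substitutions in reverse before committing, so the stored lingua franca text stays identical for everyone.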
To make version 0 more manageable, I’ve reduced the number of display pins by a factor of four. The design now has a total of 576 display pins in a grid of 18 rows and 32 columns (i.e., 6 rows and 16 columns of 6-dot braille cells), as shown in the following mechanical drawings of the top and bottom surfaces.
According to its Wikipedia page, “It is the only eight-dot alphabet listed in UNESCO.”
The bottom row’s two dots are used as shift and ctrl keys that operate on the well-known 6-dot braille cell. Being able to shift to an upper-case letter within a single braille cell could significantly shorten braille source code that uses many capital letters.
The numeric characters are separate from the alphabetic characters. Not requiring a leading cell to differentiate a string of numbers from a string of letters seems easier to read, since there is no prior number-mode switch whose state must be remembered.
The bottom row’s ctrl dot currently encodes the accented characters Ä, Ë, and É, but for Julia code I think it would be more appropriate for that ctrl dot to switch to the Greek alphabet.
I have not yet tried to fit all of Julia’s symbol operators into this 8-dot cell, but it could certainly cover a lot of them. For example, perhaps ctrl-’ = ` (i.e., ctrl + apostrophe = backtick).
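Here is a toy sketch of the shift/ctrl idea (the bit layout and the tiny letter tables are my own assumptions, not an established braille standard): bits 1–6 hold the base 6-dot letter cell, bit 7 acts as shift (uppercase), and bit 8 acts as ctrl (Greek).

```julia
# Tiny sample tables; a real implementation would cover the full alphabet.
const BASE  = Dict(0b000001 => 'a', 0b000011 => 'b', 0b001001 => 'c')
const GREEK = Dict('a' => 'α', 'b' => 'β', 'c' => 'γ')

function decode(cell::UInt8)
    letter = BASE[cell & 0b00111111]      # dots 1–6: the base letter
    shift  = (cell & 0b01000000) != 0     # dot 7: shift to uppercase
    ctrl   = (cell & 0b10000000) != 0     # dot 8: switch to Greek
    ch = ctrl ? GREEK[letter] : letter
    return shift ? uppercase(ch) : ch
end

decode(0b000001)    # → 'a'
decode(0b01000001)  # → 'A'
decode(0b10000011)  # → 'β'
```

The point of the sketch is that a capital or Greek letter costs one cell instead of two (no separate capital-indicator cell).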
It seems as if a couple of types of translations (implemented in the display device) would be very helpful:
contractions of long strings of Julia text that carry little information, and
translation from other braille encoding schemes (e.g., when collaborating with another person who prefers to use a different type of braille).
Concerning the first, naively displaying the entire REPL prompt and indentation would waste many of the display’s very limited braille cells. (With 8-dot braille cells, the proposed display has just 4.5 rows of 16 columns of braille cells.) For example, in this code, all four braille-cell rows are needed to display “Hello, World!” because of the lengthy prompt and indentation.
Contracting the REPL’s prompt and indentation would save a lot of display space. On a related note, does anyone know of a reasonable estimate of the frequency of use of Julia’s keywords? It would save display space to define contractions for the longest and most frequently used Julia keywords. (I Googled “julia keyword frequency of use” and got plenty of links to text-analysis Julia applications, but none that analyzed Julia’s own keyword usage across a reasonably large corpus of Julia source code.)
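Such a self-analysis seems straightforward to sketch (the keyword list below is partial, and the tokenization is crude — it will also count keywords appearing inside strings and comments):

```julia
# Count Julia keyword frequencies across a corpus of .jl files, to decide
# which keywords deserve the shortest contractions.
const KEYWORDS = ["function", "end", "if", "elseif", "else", "for", "while",
                  "return", "begin", "let", "struct", "module", "using"]

function keyword_counts(dir::AbstractString)
    counts = Dict(kw => 0 for kw in KEYWORDS)
    for (root, _, files) in walkdir(dir), f in files
        endswith(f, ".jl") || continue
        for tok in split(read(joinpath(root, f), String), r"\W+")
            haskey(counts, tok) && (counts[tok] += 1)
        end
    end
    return sort(collect(counts); by = last, rev = true)
end

# keyword_counts("path/to/corpus")  # most frequent keywords first
```

Running this over a large registry checkout would give the frequency table needed to assign the shortest contractions to the most common keywords.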
My idea to reduce the width of the proposed display from 64 display pins to 32 display pins (= 16 braille cells) has failed. After seeing how complicated it would be to write the simple
in lines that are only 16 characters wide, the width of the proposed display is back to 64 display pins (= 32 braille cells), which is enough to hold println("Hello, World!") in one line … bringing the total number of display pins up to 1152 (= 64 × 18) but significantly improving the proposed display’s Time To Hello World (TTHW) metric. The resulting 32:9 aspect ratio also might improve literary accessibility by being able to simultaneously display a tactile graphic alongside its braille caption. Here is the updated mechanical drawing of the display surface, with strings partitioning the display into 8-dot cells.
Here’s an early attempt at a Luxembourgish/QWERTY/Julia (LQJ) table: an approach to encouraging succinct Julia code when it is written in braille (because refreshable braille cells are scarce) by mapping multi-byte Unicode characters to single-byte Unicode offsets. The left side of the table representing English and Greek alphabets looks good. (To see details, zoom in.) The right side of the table, representing the numbers and operators and miscellaneous letters from other languages, still needs a lot of work. The Julia code that generates the table is in the function lqj() in the appendix of this essay about the proposed tactile display.
Using only lower case characters as the index into the above LQJ table leaves 17 columns of the table unused. I’m wondering about using Julia’s string interpolation syntax to access those 17 columns. For example, perhaps $i could represent a single byte contraction of "if " and $I a single byte contraction of "elseif ".
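One way to prototype the idea without touching Julia’s parser is a simple preprocessor that expands the single-byte contractions back into plain Julia text before parsing (the contraction table below is hypothetical):

```julia
# Hypothetical contraction table: $-prefixed single characters expand to
# Julia keywords before the source is handed to the parser.
const CONTRACTIONS = Dict("\$i" => "if ", "\$I" => "elseif ", "\$e" => "end")

function expand_contractions(src::AbstractString)
    out = src
    for (short, full) in CONTRACTIONS
        out = replace(out, short => full)
    end
    return out
end

expand_contractions("\$ix > 0\n  y = 1\n\$e")  # → "if x > 0\n  y = 1\nend"
```

Note this is a purely textual expansion, independent of Julia’s actual string interpolation; it just reuses the familiar `$` as the contraction marker.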
Any thoughts on the pros or cons of using Julia’s string interpolation this way?
It doesn’t sound like you want string interpolation. It seems like you want a completely different abbreviated surface syntax that maps to a Julia backend. Which is totally fine, but you’ll need to implement your own parser.
But I would get significant input from visually impaired programmers first, and so far that input seems to be absent from this discussion.
This Stack Overflow thread is particularly relevant here; it includes comments from several professional programmers who are visually impaired, and a summary quote is:
The majority of blind computer users and programmers use a screen reader of some sort.
i.e. an audio interface, sometimes supplemented by Braille.
There have also been a number of past works on programming for the visually impaired, and it seems that virtually none of them focus on Braille:
Yes, it’s time to get a sort-of-working (SOW) prototype in front of people.
When I started thinking about this project a couple of months ago, what intrigued me then (and still does) is that it is a cheap, rickety collection of wood, bamboo, and string that gains a lot of sensing and processing capability because a smartphone looks down on it from above.
I think it has a pretty good chance of helping with the problem of creating tactile graphics and maybe even making them interactive. It might also be able to help scale the work of braille teachers who are in short supply as described in “The Braille Literacy Crisis in America”.
My guess is that whether blind programmers prefer braille or a screen reader will primarily be a matter of speed: whichever lets them move their focus to a different line fastest will be preferred. And I don’t think they’ve yet had a fast braille option. As described here by the American Foundation for the Blind, there are 18 different refreshable braille products made by 10 different companies, but all of them are limited to displaying a single line of braille at a time. (How did that cluster happen?)
I’ll post to this discussion when I have an early prototype in hand. Meanwhile, I plan to continue posting notes about progress to this essay … mainly as an organizational aid for myself.
It is relevant but a little dated. A more recent view about assistive technologies for the blind is in the Jan 2022 research report “Workplace Technology” by the American Foundation for the Blind. On page 33,
In an open-ended question, participants were asked what additional accommodations
they believed would allow them to perform their work responsibilities more efficiently.
The most mentioned accommodations were:
• multiline braille displays or other improvements to the smooth functioning of braille
• artificial intelligence (n=18)
• some form of smart glasses (n=17)
• an indoor GPS (n=9)
• some form of visual interpreting service (n=7)
And on page 50 of the same research report,
When asked about technology they would like to see developed that would increase
their productivity, participants responded that they would like companies to create
content and technology that is accessible from the onset. They would like to see
the enforcement of laws related to accessibility and improvements to products,
such as better OCR software, more enhanced voice controls, and affordable multiline braille displays.
The highest-risk part of the design seems to be the inch-long ratchet mechanism (shown in the animated GIF).
To quickly iterate on that ratchet mechanism design and make a few prototype braille displays, I ordered a high speed 3D printer 10 weeks ago and it is scheduled to be delivered in November.
I’m starting to think that much of the braille display’s display pin layer can be made out of recycled materials.
In February, I ordered a raspberry pi to control the pushing of the display pins in a prototype braille display. The part was back ordered and was finally delivered last week. The packing list had the following notice: “These items are controlled by the U.S. Government and authorized for export only to the country of ultimate destination for use by the ultimate consignee or end-user(s) herein identified. They may not be resold, transferred, or otherwise disposed of, to any other country or any person other than the ultimate consignee or end-user(s), either in their original form or after being incorporated into other items, without first obtaining approval from the U.S. Government or as authorized by U.S. law and regulations.” I guess the export controls make sense but I was not expecting a part from the UK to be subject to US export controls.
The high speed 3D printers I ordered had a one month manufacturing delay in China. So their estimated delivery date is now in December. I am looking forward to 3D printing the ratchet mechanisms that will hold up the display pins as the scan line traverses the display.
I have reduced the display pin diameter to 3 mm resulting in a tactile display resolution of about 4 dots per inch, which would be dreadful for a printer but perhaps is acceptable for a low cost tactile display.
Stanford University’s Shape Lab has demonstrated a prototype tactile display, but its parts alone cost $6000, putting it in the same order-of-magnitude cost range as the currently available 80-character refreshable braille devices that cost $15,000. Both rely on expensive materials and technology.
In contrast, it seems possible to make a tactile display that costs a factor of 100 less by using really cheap materials and technology that are configured by good designs in geometry. (After all, a tactile display is just a bunch of small geometry problems.) For more about the development of good design in geometry, see this post.
I’ve started 3D printing parts. The AnkerMake M5 has turned out to be a good choice because it is fast and reliable, which will be needed when 3D printing the ratchet mechanism for each of the 2304 (= 64 x 36) display pins in a single tactile display.
My first goal is to make kits that convert an empty roll of toilet paper into a demonstration of a tiny tactile display with just 16 display pins. The working demonstration will show how the ratchet mechanism holds a shape and then erases it when turned upside down (similar to an etch-a-sketch but for a raised surface).
I’m “eating my own dog food” by using Julia to create the models for all the 3D-printed parts. The typical approach to creating 3D models involves a lot of manipulation of a GUI within a 3D modeling application. However, all that GUI manipulation is not accessible to blind programmers, who would be capable of designing and implementing their own 3D models given the tools to do so. If you are interested in seeing this Julia approach to creating 3D models, I’ve posted an outline in this essay. Basically, the approach is to generate a mesh with reference indices to groups of vertices and faces. If you try that approach and create an interesting 3D model, please post an image of it here.
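A minimal sketch of that mesh-with-reference-indices structure (the type and field names are my own illustration, not the essay’s actual code): vertices and faces are stored flat, and named index groups let later code refer to features like an edge or a face set without any GUI picking.

```julia
# A mesh whose vertex groups can be referenced by name in downstream code.
struct RefMesh
    vertices::Vector{NTuple{3,Float64}}
    faces::Vector{NTuple{3,Int}}         # triangles as vertex-index triples
    groups::Dict{String,Vector{Int}}     # name => indices into `vertices`
end

function unit_square_mesh()
    v = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
    f = [(1, 2, 3), (1, 3, 4)]           # two triangles covering the square
    g = Dict("left_edge" => [1, 4], "right_edge" => [2, 3])
    return RefMesh(v, f, g)
end

m = unit_square_mesh()
m.groups["right_edge"]   # → [2, 3]
```

Because every feature is addressable by name, an operation like “extrude the right edge” becomes an ordinary function call, which is exactly what a screen-reader or braille workflow needs.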
It seems the tactile display will be associated with three different rates of change:
the design details within each of the display’s three layers (i.e., display pins, robotics, smartphone app) are expected to change very quickly,
hopefully, the rate of change of the interfaces between those layers will be much much slower, and
slowest of all, the target itself: the chronically high unemployment rate among the blind will be very difficult to decrease significantly. Ideally, blind people will make and repair tactile displays for other blind people. In general, blind people need many more accessible tools to make many more things than just tactile displays. One possible tool is an accessible CAD system, as described in the previously linked essay 3D Modeling for the Blind - lacking a GUI but gaining the concise and expressive power of category theory.
know the history - The Braille Authority of North America posted the essay The Evolution of Braille that describes how braille has slowly changed, including the difficulty of adding new braille symbols due to conflicts with duplicate braille symbols in other braille codes. That problem seems very similar to the problem of conflicts in different software modules due to duplicate global variable names.
lower the cost - In order for a tactile display to efficiently display software source code, the display must have thousands of display pins, which means that each display pin must be dirt cheap in order for the overall tactile display to be affordable. This essay about strategy proposes discarded paper as the display pin material.
contract the code - Even with thousands of display pins, it will be a challenge to compactly display software source code on a tactile display. Grade 2 braille uses contractions of common words and common groups of letters in an attempt to display braille compactly (e.g., a ‘d’ standing alone is a contraction for the word ‘do’). This approach to compactness is a bit of a shot in the dark, given that the most common words and groups of letters differ from document to document. Perhaps Julia’s metaprogramming capabilities can improve on that approach by implementing “local contractions” limited in scope. For example, perhaps there can be one set of contractions for all of Julia’s keywords and another set of contractions for each function’s variable names.
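A sketch of the “local contractions” idea (every design choice here — the frequency ranking, the length threshold, the stand-in glyphs — is an assumption, not an established scheme): build a per-scope contraction table from the identifiers that actually appear in one chunk of source text.

```julia
# Assign single-character contractions to the most frequent long identifiers
# in one scope's source text, so each function gets its own compact table.
function local_contractions(src::AbstractString; minlen::Int = 5)
    freq = Dict{String,Int}()
    for t in split(src, r"\W+")
        length(t) >= minlen || continue
        freq[t] = get(freq, t, 0) + 1
    end
    ranked = sort(collect(keys(freq)); by = t -> -freq[t])
    # ①, ②, ③, … serve as stand-in contraction glyphs here.
    return Dict(t => string('\u2460' + i - 1)
                for (i, t) in enumerate(first(ranked, 9)))
end

src = "temperature = sensor_read(); temperature = temperature + offset"
table = local_contractions(src)
# "temperature" appears three times, so it gets the first contraction glyph.
```

Because the table is rebuilt per scope, a short glyph never collides with the same glyph meaning something else in another function, which sidesteps the global-name-conflict problem that the Evolution of Braille essay describes.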
Yes, audio is very important. The specification for tactile graphics is over 400 pages long but it seems it could be so much simpler by integrating audio (e.g., play audio description of a diagram’s feature when you point to it). Perhaps similar benefits could come from integrating audio with a braille display of source code, where the braille would be good at showing indentation and nesting of parentheses and the audio would be good at expanding contracted text.