If you’re a developer targeting mobile platforms, you’ve probably heard about Xamarin.Forms. Xamarin.Forms is a fairly new UI toolkit that lets developers write applications that adapt to the native look and feel of Android, iOS, and Windows Phone without platform-specific code. Xamarin is also known for shipping it without a WYSIWYG editor.

Last month, a software engineer from Xamarin commented that WYSIWYG editing for Xamarin.Forms was “not currently available, nor is it in the roadmap, but it's definitely something we'd like to do at some point.”

The company’s lukewarm commitment to WYSIWYG might strike you as remarkable, given that:

  • Gartner has listed Xamarin as a “Visionary” in its Magic Quadrant for Mobile Application Development Platforms;
  • Visual editing of user interfaces in a designer preview has been a staple of UI development tools since the 1980s.

“1980s?” you may be thinking. Let’s quickly recap three decades of design-time visual editing of enterprise UIs:

WYSIWYG in the 1980s:

Definitions of byte positions and buffer generation

WYSIWYG in the 1990s:

Bidirectional editing of procedural code with absolute positioning

WYSIWYG in the 2000s:

Bidirectional editing of declarative formats (XML, CSS) with adaptive layout managers
We may already be halfway through the current decade, but whether WYSIWYG will endure in a big way in the 2010s, and if so what its defining characteristics will be, remains unclear. This uncertainty is highlighted by the following Google Trends graph for the term:

Understanding user interfaces

The software industry likes to congratulate itself on the great strides it has made since the days of Assembler in aligning the vocabulary of what users articulate as requirements with what developers write in their programs. As time has progressed, developers have been able to concern themselves more with problems expressed in the terms of the user rather than the terms of the computer.

But for a long time, the exception has been user interfaces. Developers well into the 2000s have been happy to store user interface definitions in code formats that were meant for computers, not humans, to understand. The only reason this worked was that in most cases the developer did not have to read or understand the code – they could simply let the developer tools interpret it and display the screens using WYSIWYG. The “code behind” was just for the computer to read.

As UI frameworks catch up with programming languages in expressiveness, legacy developers are challenged to think about how we describe and understand space in user interfaces.

To get an idea of this, try an experiment – imagine a space that contains perfect cube-shaped boxes, 1m on each side, placed at the following coordinates, with x pointing south from the middle of the room, z pointing west from the middle of the room, and y pointing up away from the floor:

A computer has no problem quickly interpreting and rendering a scene based on this set of coordinate data. Developers, however, will have a tougher time creating a mental image of the scene just by considering the numbers, and users are very unlikely to express their requirements using coordinates for each text element they need on the screen. What a user might be more inclined to write (in the absence of a sketch) would be a textual description like:

Arrange 14 boxes into stacks no more than 4 boxes high, starting in the middle of the space, in rows of two stacks facing west. When a row is full, start a new row to the north, each time leaving 1m of space between rows.
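The asymmetry cuts both ways: while humans struggle to go from coordinates to a mental picture, a computer can trivially expand the textual rules back into coordinates. As a minimal sketch (the function name and parameters are illustrative, not from the article), the stacking description above can be turned into (x, y, z) positions mechanically:

```python
def stack_boxes(n, max_height=4, stacks_per_row=2, box=1.0, row_gap=1.0):
    """Expand the textual stacking rules into (x, y, z) coordinates.

    Axes follow the article's convention: x points south, z points west,
    y points up; all sizes are in metres.
    """
    coords = []
    for i in range(n):
        stack = i // max_height        # which stack this box belongs to
        level = i % max_height         # height of the box within its stack
        row = stack // stacks_per_row  # rows fill up two stacks at a time
        col = stack % stacks_per_row   # position within the row, going west
        coords.append((
            -row * (box + row_gap),    # each new row steps 2m to the north
            level * box,               # boxes pile upward
            col * box,                 # stacks sit side by side, westward
        ))
    return coords

boxes = stack_boxes(14)  # two full stacks per row, last stack holds 2 boxes
```

The one-sentence description plus a handful of conventions reproduces all 42 numbers – which is exactly the compression that layout managers exploit.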

A picture is worth a thousand words, but a word can be a thousand pixels

The textual version of the scene description is easier for most humans to grasp and communicate than the set of coordinates, especially if there is no sketch or other display of the scene that can be quickly referred to. This only works for simple arrangements, of course. If the description can’t use words from our natural language to describe the layout (like “stack” and “row”) then the language loses its compact descriptive power and we gain little advantage over the coordinates.

The decline of WYSIWYG and absolute positioning is analogous to software developers’ move away from Assembler languages, since both remove the hardware layer as an item of concern: mainstream developers today are turning away from static pixel references just as they turned away from register operations in the 1970s.

Herein lies the danger for legacy developers: the fading importance of WYSIWYG and absolute positioning comes amid a slow revolution in enterprise UI development in which layout managers and spatial vocabulary are increasingly used. Static screen layouts that might appear simple to maintain using a visual designer can become impractical to convert to many modern UI frameworks.

Consider a screen from the 1980s with a static layout like this:


Automated modernization of this screen in a way that preserves the layout but records it in a textual, non-WYSIWYG format will probably fail from a maintainability perspective: there is no straightforward way to describe this layout using words like stack, grid, flow, orientation, aside, column, row, or cell.

Advice for legacy developers? Next time you place a piece of text on a screen, consider whether you can describe its location in a sentence. If you can’t think of a word to describe its position, you’re probably depending on the hardware more than you realize.