First off, WYSIWYG HTML editors exist and have existed for years.
The author is glossing over the biggest issue: nobody writes web pages the way they use Word. Nobody just types up an HTML page from <html> to </html> and then moves on to the next page; WYSIWYG editors would be great at that.
The author mentions "static site generators", and that is the crux of the issue. Nowadays HTML is usually built by templates, sometimes even transpiled from something else entirely (HAML, etc.). There's still markup, but it's surrounded by template directives, which can be fairly complicated Turing-complete languages in and of themselves (Template::Toolkit and HAML will both let you drop down to the parent language if you need to).
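As a minimal sketch of what I mean (using plain Python string formatting rather than any particular template engine, and with made-up data), the markup a WYSIWYG editor would want to show is generated inside loops and conditionals, so there is no single static page to point the editor at:

```python
# Toy illustration of template-driven HTML: the "page" only exists
# after loops and conditionals have run over the data.

def render_article_list(articles, logged_in):
    rows = []
    for a in articles:                        # loop directive
        if a.get("draft") and not logged_in:  # conditional directive
            continue                          # drafts hidden from visitors
        rows.append(f"<li><a href='/a/{a['id']}'>{a['title']}</a></li>")
    return "<ul>\n" + "\n".join(rows) + "\n</ul>"

html = render_article_list(
    [{"id": 1, "title": "Hello"}, {"id": 2, "title": "Secret", "draft": True}],
    logged_in=False,
)
print(html)
```

Which of the two possible `<li>` rows should the WYSIWYG view show you? That's the editing problem in a nutshell.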
Even if there were some sort of nice, usable visual programming language, WYSIWYG would still be problematic, since you would have to show the loops and conditionals and such superimposed on the HTML (otherwise, how can you edit something that isn't shown?), and as soon as you see the structure of it all, it isn't WYSIWYG any more (by definition).
The fact is, we in the hacker culture don't embrace WYSIWYG and hang on to our terminals because we're stubborn old codgers; it's just that we've experienced both, and we know which one is better.
You haven't addressed the main flaw of WYSIWYG HTML editing, which is that what you see is not what I get. There are so many things that can change what I see. That's kind of the point of HTML: you don't know what browser I use, or OS, or screen size, or whether I even have a screen.
I accept your point that WYSIWYG could be powerful for website creation, but web-searching for the <meta name="generator" content="$FOO"> tags of the various WYSIWYG tools doesn't turn up good examples.
I think the author is rather talking about <textarea> vs rich text editor widgets for user supplied content, and not really about things like Dreamweaver.
Ten years ago I worked for a web dev company that embraced rich text editor widgets for user supplied content. It was great. We charged them an arm and a leg for the ability to edit their content, then we charged them more to fix it when they broke it or couldn't do what they wanted, which was always.
Subsequently, when I'm doing websites for myself or for people who I like, I don't use rich text editor widgets.
It's possible I'm stuck in the past, but it's not an ideological issue.
The author is ignoring the elephant in the room: WYSIWYG is for static, single format output. It doesn't define behavior.
Even after so many years, MS Word is still only suited for print and static digital facsimiles of print documents. It fails miserably at anything else it tries to do.
Fuck, WYSIWYG is not even suitable for creating good multiple resolution purely static output on nearly identical devices. For the simple reason that if you want to get that right, What You See Is Not What Everybody Else Gets.
WYSIWYG only has value in tight, never changing constraints, and those are actually disappearing more and more in favor of interactive, fluid forms of output.
WYSIWYG is not the future, it's a relic from the age of print.
If there is a way to make the power of raw code more user-friendly and accessible, WYSIWYG isn't it. The whole acronym suggests something that doesn't exist and/or is utterly undesirable in a digital world.
HTML doesn't define behavior either, but it can be combined with tools that do (scripting languages), in the same way that WYSIWYG tools can. That's not a valid argument to prefer textual over visual representation.
I picked up an allergy to WYSIWYG while reading through the works of Engelbart. It's not flexible enough, and it locks you into a way of thinking that does not properly leverage the advantages of our new medium; related:
>Our approach was very different from what they called "office automation," which was about automating the paperwork of secretaries. That became the focus of Xerox PARC in the '70s. They were quite amazed that they could actually get text on the screen to appear the way it would when printed by a laser printer. Sure, that was an enormous accomplishment, and understandably it swayed their thinking. They called it "what you see is what you get" editing, or WYSIWYG. I say, yeah, but that's all you get. Once people have experienced the more flexible manipulation of text that NLS allows, they find the paper model restrictive.
> We weren't interested in "automation" but in "augmentation." We were not just building a tool, we were designing an entire system for working with knowledge.
> Question: Isn't it great that we now have WYSIWYG technology, so what you see is what you get? You can print it out, you have wonderful, good-looking documents with all kinds of typefaces?
Answer: That really is nice for the people who want to stay where they used to be... That's ignoring hugely all the other options you have...
If one is capable of holding the entire conceptual, abstract ephemera relating to the concrete problem one is solving, then WYSIWYG is mentally jarring, as it often takes away from proper semantic representations of concepts. Indeed, very often WYSIWYG attempts to hide semantic meaning from users in an attempt to seem easier. For the simpler cases this is true. But for complex cases, where structure, uniformity, and restructuring are important use cases, WYSIWYG is destructive compared with a much simpler markup.
I'd also argue, with little evidence, that for a competent user there is something mentally jarring about having to interrupt the flow of editing to move your arm to a mouse.
I think hackers (in any field) like "productive" user interfaces, everything else is irrelevant. There are productive WYSIWYGs and GUIs, look at e.g. Blender, but they are usually hard to learn and/or they look really ugly, so they are not appreciated by commoners or designers.
For the UI to be productive, it has to have a couple of features:
- Highly configurable, possibility of presets
- Fully operable by keyboard alone
- Available for scripting
You can combine those with a smooth learning curve and nice design, but it's a very hard thing to do. That's why application builders (on either side) usually don't bother.
That's pretty much the entire reason. Plain text is easy and maps 1-to-1 with the keyboard you have right in front of you. And even when you start binding commands to keys, there isn't too much choice to be made.
Making good complex GUIs is really hard, and it seems to have the same kind of inherent complexity as programming - you can't simplify it past giving the developer a really shiny tool for putting buttons in widgets easily, which doesn't at all mitigate the hardship of deciding exactly where the buttons go, what they say, and how they act.
I strongly disagree. The author is ignorant of the issue, but instead of recognizing what he does not know, he creates an entire narrative from the elements he has.
The problem with web interfaces is not programmers; it is this: you have absolutely NO CONTROL over the final display.
That's it. You have no control over aspect ratio, resolution, how fast the computer is, or even the link speed.
So if you are a designer, the fact that something WYSIWYG works well on your Retina display with your fiber-optic Internet connection does not mean it works out there on the myriad of different user setups.
WYSIWYG is a terrible solution when you have this many alternatives. E.g., if you have 6 display aspect ratios (4:3, 16:9, the new Surface Pro's 3:2, and the portrait versions of those), 4 main resolutions, 3 main Internet speeds, and 4 tiers of computer power (based on how old the computer is), you have:
6x4x3x4 = 288 possibilities.
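The arithmetic checks out; here's a quick sketch of the combinatorial blow-up (the category values are just placeholders, not real test targets):

```python
from itertools import product

# One placeholder entry per variant the parent comment counts.
aspect_ratios = ["4:3", "16:9", "3:2", "3:4", "9:16", "2:3"]  # 6, incl. portrait
resolutions   = ["res-1", "res-2", "res-3", "res-4"]          # 4 main resolutions
net_speeds    = ["slow", "medium", "fast"]                    # 3 link speeds
cpu_tiers     = ["old", "older", "new", "newest"]             # 4 power tiers

# Every combination is a distinct environment a pixel-perfect
# WYSIWYG layout would need to be verified against.
configs = list(product(aspect_ratios, resolutions, net_speeds, cpu_tiers))
print(len(configs))  # 6 * 4 * 3 * 4 = 288
```

And that's before accessibility requirements multiply it further.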
Now, have fun testing at least 288 different versions of everything you do (and there are other possibilities, like users with disabilities, whom in some jobs, like government work, you are required to support), while the Unix hackers and child-eating gatekeepers just do it once.
I for one want to improve the system, but text (code) is one of the best solutions we have so far. The reason there is nothing better is that people would rather spend a significant amount of time ranting about others than doing something about it (and facing the problems that ranting doesn't have, like the possibility of failure, which we underestimate from our armchairs).
As I see it, WYSIWYG works great if you create a non-interactive thing, like a document, image or video. As soon as you want to present the same information in several formats, or show headlines of articles in several places you need to abstract those things. None of this is impossible with WYSIWYG, but it is usually much more work than just having some kind of plain-text at the core.
On the other hand, a well-executed WYSIWYG editor with a well-defined abstract document model would be a great thing. The abstract document model is really essential (and probably plain-text based). The minute you let people WYSIWYG edit, they start doing formatting oriented things such as add spacing using actual spaces. It looks the same on their screen, but might break other formats of the same information. If you tell them to stop doing "that kind of thing", then very soon they are essentially doing plain text-editing with a WYSIWYG tool.
> they start doing formatting oriented things such as add spacing using actual spaces
Then perhaps the space key on the keyboard should be tied to adding spacing, rather than space characters. That's really a failing of the WYSIWYG editor to take into account normal computer peripherals. The user wants space. The user has a large board with keys on it with a huge "space" key. The user presses the large space key and gets space. A good WYSIWYG editor should account for the fact that my keyboard has keys on it that aren't letters.
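One hypothetical way an editor could do that (a sketch of the idea, not any real editor's API): map the space key to a spacing node in the document model, so "space" means layout that each output format can resolve, rather than a literal U+0020 character:

```python
# Toy document model: keystrokes become semantic nodes, so pressing
# space produces spacing (resolved per output format), not characters.

class Doc:
    def __init__(self):
        self.nodes = []  # list of ("char", c) or ("spacing", width) tuples

    def keypress(self, key):
        if key == " ":
            # Coalesce repeated presses into one growing spacing node.
            if self.nodes and self.nodes[-1][0] == "spacing":
                self.nodes[-1] = ("spacing", self.nodes[-1][1] + 1)
            else:
                self.nodes.append(("spacing", 1))
        else:
            self.nodes.append(("char", key))

    def to_html(self):
        out = []
        for kind, val in self.nodes:
            if kind == "char":
                out.append(val)
            else:
                # Spacing renders as style, never as literal spaces,
                # so it can't silently break other output formats.
                out.append(f'<span style="margin-left:{val}em"></span>')
        return "".join(out)

doc = Doc()
for k in "a   b":
    doc.keypress(k)
print(doc.to_html())  # a<span style="margin-left:3em"></span>b
```

The user still just presses the big space key; only the representation underneath changes.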
Once you leave design and layout to non-designers/non-layouters, they will ruin everything, and you will get angry calls from the companies' designers, complaining about how their beautiful design was ruined by someone using Comic Sans on a header. I've seen it happen. This is why CMS makers know that editors / article writers should not be allowed to design/layout pages.
Ideally, a WYSIWYG editor in a CMS should only allow users to use bold, italics, links, and perhaps insert small images. Anything more than that, and your web pages end up looking like MySpace/GeoCities.
There's no reason why designers and ''layouters'' couldn't use WYSIWYG tools, nor should they be forced to use plain text in order to design the visuals of a site. Your argument is against letting end users control the whole process, but it's not an argument against WYSIWYG per se.
I see what you mean. However, given the large number of different CMSes, I'm not sure designers would appreciate having to re-learn how to design every time they use a new CMS. Right now they use a relatively small "default" set of tools, with Photoshop being the chief one.
I'm not a designer, but my feeling here is that designers wouldn't appreciate the workflow you're suggesting. Any designers here who'd like to weigh in?
That's why design tools in visual CMS should be integrated with the current methods that designers use in specialized apps :-)
Who says you can't merely reuse the same basic workflows that have been refined by professional designers, augmented with general-purpose mixed visual-textual scripting? That sounds like a winning strategy to me.
Designers ''do'' learn ad-hoc batch processing tools for image transformation and publication of the final work; having those standardized would be a net win.
> a programmer-driven choice for Markdown forces the Unix love of editing plain text onto everybody.
If you really want revolutionary WYSIWYG interfaces that much then learn programming and develop them yourself. But by then you probably will have recognized the superiority of plain text anyway.
Plain text is not superior, merely better supported, because the libraries for it have been around longer and standards for it were created. If adequate standards for WYSIWYG were created, it would definitely be better than plain text from an end-user point of view.
In fact, there are numerous cognitive drawbacks that can be corrected by using secondary notation (available in either WYSIWYG or markup), so unadorned plain text is objectively worse than rich text - if only good tools existed for the latter; the author is right in wanting hackers to make those ubiquitous.
The problem with WYSIWYG is that it is hard to automate (and is not composable). I have worked with a "WYSIWYG tool chain": you couldn't build anything without someone hitting buttons on a graphical interface at periodic intervals. It is a real pain when you want to set up a CI server, or simply if you want to run a build during your lunch... I have no issue with WYSIWYG per se, but you have to provide me with an alternate path to complete the same task automatically/remotely/with a cron job, etc. So the original poster might have relevant points regarding visual design, but there are reasons why developers tend to favor scripts, shells, and terminals; it is not purely a cultural issue.
That's not an inherent property of WYSIWYG. You could equally say "the problem with building code is that it is hard to automate (and is not composable)", which is true of most programming languages, but then there are IDEs that largely automate tedious programming tasks, and there are languages designed for meta-programming; what is lacking is good support from dedicated tools, and a design for automation.
I have some ideas of my own on how such a system could be built, but fortunately many other independent initiatives have emerged lately in that direction (mainly thanks to the ideas of Bret Victor, and all the recent advances at the intersection of reactive programming and user interfaces).
I wholeheartedly agree it is more an ecosystem issue, but like I said, I have no issue with the concept of WYSIWYG itself, just that as of today such tools lack some characteristics needed to be efficient. And it is true that the lack of innovation in that area might be a result of habits or culture.
I believe, contrary to almost all programmers, that this is in fact fundamentally a cultural problem, as the author points out, and NOT a practical or technical problem. I think the problem is more fundamental than the author suggests, though.
Fundamentally programming (or software engineering, web development, etc.) is defined by editing textual source code. That's the definition. WYSIWYG is not, by definition, programming. By definition, WYSIWYG is sort of the opposite of programming. Programmers are proud of their ability to create textual source code that is processed to create programs or output.
Think about a specific programming activity. For example, creating a web application. If, rather than using code, I use a GUI tool to define the fields in my database, create links between tables, define and style forms for entering and displaying the data, am I programming? If that's the way I build web applications, am I a programmer? What if instead I find a Go-based Object-Relational-Mapper, write some SQL scripts to create the database schema, use SASS to define the styles for my web-based forms, etc. Now, did I do some programming? I created the exact same web application with two completely different approaches. Using the first method, I cannot call myself a programmer (some may argue this, but the general impression most programmers will get, perhaps unfortunately, is that I did not do programming and am not a programmer). In the second method, I used much more effort and achieved the same effect, but I did it by writing textual source code. Therefore in the second case I am a programmer. The difference is simply that I wrote textual source code.
Now, I am not saying that WYSIWYG editors cannot produce complex computer programs, even ones that are exactly the same as those created in text editors, or that I believe that WYSIWYG should NOT be considered programming. I think actually we should update our definition of programming to include WYSIWYG, and I believe that old definition is the main thing holding back software engineering.
It's really as simple as that. Text = programming. Not text = not programming. That means we are stuck in the '70s forever. It's a cultural failure.
> Programmers are proud of their ability to create textual source code that is processed to create programs or output.
I fundamentally disagree with this. You're acting like programmers/hackers do things in a textual way just to justify their existence.
Those GUI tools you posit exist—my Dad (who is technical but not a programmer by trade) used FileMaker to develop a little app for his church sometime in the early '90s and still to this day uses it. Does it matter that he didn't write any code? No. He produced a useful app and it's that end result that matters.
The problem is not that those tools don't exist, and it's not that "real" programmers look down on them just because they aren't textual. The problem is that those kind of tools are just fine, as long as you don't do anything complicated. Once you stray a little off the path of their standard use case then they crumble into a pile of leaky abstractions.
When you're dealing with textual programming languages it's generally much easier to stray from the beaten path since they are usually much less constrained. If you're writing a web app in Ruby, or Node, or Java, or whatever, you can do anything. It's more flexible, but at the cost of being more complicated. And the more complicated things get, the harder they are to shoehorn into a gui.
I'd love to see a gui programming language that was as open ended, as easy/quick to edit, and as easy to visualize as textual code in a text editor is, but they don't exist. And I fear they never will.
> —my Dad (who is technical but not a programmer by trade)
If he had written it in a 'real' (textual) programming language, you might have written that differently. You might have written "technical but not really a programmer" or something.
> Does it matter that he didn't write any code? No.
Yes, it does, because if he had you would have given him credit for being a programmer and acknowledged that his application is complicated.
I wonder how long it took your father to create that application with FileMaker. I would like to see how long it would take you to code it "from scratch" using your favorite "real" programming system.
> If he had written it in a 'real' (textual) programming language, you might have written that differently. You might have written "technical but not really a programmer" or something.
I wrote it the way I did because it's more nuanced than that. My dad can program. He's written assembly and knows some C. But he's not a programmer by trade—he's a hardware guy.
I think you will be pleasantly surprised in the next ten years or so. The seeds for such programming languages have been everywhere in the last couple of years or so, if you know where to look.
I agree that "draw and connect boxes" is an awful interface for general programming. But I've used it, and it's terrific for some data-manipulation tasks; and there are other visual paradigms not based on boxes and arrows (such as UML/model-based architectures, visual constraints, "Programming on Principle", whole-execution "rewindable" debuggers...) that offer substantial benefits even for more general development - so there are people who ''do'' program this way, and find it better in some scenarios.
The idea is that, as computers become more powerful and can support visual tools that were impractical in the '80s and '90s, more programming styles can be created that aren't based on a one-dimensional stream of characters. Heck, even syntax highlighting and Python's significant indentation provide visual usability advantages over earlier language approaches. There's nothing inherently superior in sticking to the old ways other than "it has always been that way".
Computers were powerful enough to make visual tools possible even 20 years ago. It's the clumsiness and the limited expressiveness of the interaction that limited acceptance. I know: I was there. I remember reading Stroustrup, who around the beginning of C++ wrote something like "we don't have to worry about the inefficiency of headers in C++ because we'll be using visual tools soon anyway." There actually were some tools. (http://en.wikipedia.org/wiki/Rational_Software "Rose 1.0 was introduced at OOPSLA in 1992") They stunk: http://c2.com/cgi/wiki?RationalRose
However, IDEs are alive and well, and the browser DOM is stronger than ever - with "build your app" tools around every corner.
I hesitated to put UML in there precisely because of Rational Rose, and because UML never really worked as the basis for software architecture - but in the end that's still just a glorified C++ code generator, not an execution environment. Of course, when your visual tool is a front end on top of traditional code it will have problems - you need a language that is designed from the ground up to work in a visual environment. And part of the reason why "visual environments" weren't created 20 years ago is that you couldn't execute the final program in visual form, in real time.
End-user development tools (rule-based like AgentSheets, graph-based like NoFlo or Blender's graph editor, or even those based on traditional programming like Alice, Scratch, or CodeCombat) can be quite successful.
I'm working on a school project using Salesforce right now, and it's basically what you describe: making a web application through a GUI. It seems like a good solution for a business that needs an information system without much customization, since it makes assumptions that can be constraining. I didn't enjoy working with it; it's less fun than programming with text, and I'd tend to call it "application building" rather than programming. Warcraft III's modding tools are similar - they let you build a game without very much typing. (In that case, though, the GUI didn't sap the fun out of it - probably because there was more room for creativity in Warcraft.)
I think there may be technical constraints in making such an interface customizable enough, though. Salesforce and Warcraft make a lot of assumptions about what sort of program you're making. An application-builder that's as general-purpose as a programming language sounds like it'd be really hard to build. (Although I'm not sure how much harder that is than just building something like Drupal.)
We also don't have a graphical way of describing behavior, that I know of. Even in tools like Salesforce and Warcraft, custom logic is shown as plain text (though Salesforce gives you some buttons to insert variable names and operators). It sort of makes sense to just use text, if the alternative is making people learn a new set of symbols.
You would tend to call it "application building" because, like I said, if you are primarily using a GUI, it's just not programming. By definition. By the way, I am not saying there is no place for text in programming. I am just saying that the definition that it is only text is holding programming back.
That's the "curse of End-User Development" (akin to the "AI effect" [1]): as long as any kind of automation task is made easy enough that end users can perform it reliably, it's no longer considered a programming task. It happened to spreadsheets, retrieving records from databases through GUIs, publishing and styling blogs as you explained, or the mere possibility to copy-paste large amounts of content (clearly a memcopy loop operation, but nobody sees those as programming anymore).
I find this idea of a cultural problem interesting, and in my opinion the fact that we program with plain text is an essentially arbitrary choice. That said, in practice almost all of the non-plain-text tools I know are some kind of special-case tool, and then you still need to write source code for all the pieces of the program that don't fit the mold of the GUI tool.
It's not as much arbitrary choice as inertia. When the first computer development tools were built the idea to use plain text made sense, and we've been stuck with that decision for more than 50 years. Now our computers are powerful enough so that this technical limitation can be overcome, but the cultural thing and lack of adequate tools are holding the change back.
The book ''The Humane Interface'' described a system that could use WYSIWYG in an environment that allowed general computation, and that book has greatly influenced all modern interaction design, but we're still not there because of all the legacy practices that remain important to the industry.
> I believe, contrary to almost all programmers, that this is in fact fundamentally a cultural problem, as the author points out, and NOT a practical or technical problem. I think the problem is more fundamental than the author suggests though.
Prove it to me by explaining your complete point without text, in pictures.
On a more serious note, we use text to represent symbols, because symbolic coding is powerful and pretty much the basis of our civilization, long before computers existed.
Text is powerful enough that I can talk about "Mercedes S-Class car" by hitting a few buttons, instead of spending the rest of my day drawing a Mercedes for you.
Symbolic coding can be expressed via graphics too, but it remains symbolic. Say, I could draw the logo of Mercedes, then draw a symbol resembling a car, and I wouldn't have to draw an actual Mercedes car for you, but what's the benefit compared to typing it? It's again the same, only much less efficient. Egyptians used to do it like this, but they've since changed their mind about it.
If you'd represent a program as a visual graph of function calls this is still not WYSIWYG, it's just another set of symbols representing the same abstraction. When you run the program, what you "get" isn't a graph of function calls, it's something else entirely.
So it comes down to what is more flexible than text for representing symbols, and honestly I'd like to see anything better than text.
It's not just source code which is based on text. We're communicating right now via text.
The web is built on text ("hypertext"). Web search is based on text.
Look around yourself. The majority of communication is symbols, and most of it is text.
You're really arguing against a straw man or a few of them. In other words, I wasn't trying to say that we should create programs without using any text or by drawing. Or that we should communicate without text.
Obviously, text is going to be important for programming. What I am saying is that the definition that programming only happens when you are writing textual source code is wrong and holding us back.
Start with the idea of intellisense or autocompletion for method/function call names and parameters. If this happens inline in the text editor, we are still programming, right? What if my intellisense/help for parameters etc. pops up in a box on the screen? What if it is a dialog with a description of the method, and descriptions of each parameter? Now what happens if for each boolean or enum, I have drop downs or check boxes? Now what happens if I can drag functions out of a toolbox into my code window?
What happens if I can also drag entire components into my project window and connect them to my database structure editor? But now I can also type some code into part of the editor to create some custom functionality. Am I programming? What if I didn't have to type any custom code at all? Am I still programming? I didn't write any code, I just dragged and dropped, checked some boxes and connected some components.
For as long as I can remember, IDEs have had such drag-and-drop interfaces, usually centered around building GUIs, and no one has said that just because an IDE has a visual form editor it's not programming.
You're the one making up the distinction and arguing against it.
The primary reason that not everything has turned into a drag-and-drop interface is that it's highly inefficient. It remains a fact that compared to manually hunting for class Socket in tab bars and scrollable lists, just so I can drag it and drop it somewhere, it's far easier to just use your keyboard and type "new Socket" in your code, especially with autocompletion at your service.
The main reason I prefer to work in a text editor is that it limits the number of tools I have to learn.
When I used to work as a 3D designer, I became an expert at Cinema 4D. But there were times I had to switch to another application due to the limits of C4D, and I would become lost in the UI of Maya or Max, unable to perform the simplest of actions without the assistance of Google.
My preferred text editor is Emacs, and I rarely need to use another application other than Firefox. With my text editor I can code, write markup, keep a calendar and todo list, open a shell, chat on IRC.
In a WYSIWYG world, you have a dozen different applications to perform a dozen different tasks, each requiring in-depth knowledge of that application. Yet in reality, each application is simply editing plain text.
Your argument seems to be that switching between WYSIWYG applications can be confusing whereas not switching between text based tools is not. Presumably if you were continually switching between multiple text-based editors, each with their own abstruse command key combinations, you would find that even more difficult than switching between graphical tools.
Switching between WYSIWYG applications is confusing and it would be if I was switching between multiple text editors as well. It would be a nightmare if I had to use Vim to code, Emacs for my calendar and Sublime for markdown.
I don't have to though, as I can do pretty much everything I want from a single text editor. Yes, there are different 'modes' to master, but most of the underlying commands are shared between modes so the learning curve is shallower.
Accepted, but when your argument uses it as the example to follow it's very hard not to criticise on that basis. I think a more appropriate tool would be Microsoft Visual Studio.
Talking about visual tools, many moons ago I worked with NetObjects Fusion (http://netobjects.com/html/website-design-software.html). It actually dumbed down development and did not help you understand the underlying technology.
I honestly think in this domain tools that 'hide' implementation from you are not healthy. I do think frameworks have a major role to play, but they 'enable' you to do things better and in a more consistent way.
I agree that hiding the underlying technology is not the way to go; though I believe that highly automated, "dumbed down" generators like NetObjects are problematic mainly because of an impedance mismatch between the interface and the underlying implementation, not because that style of creation is essentially wrong.
But what if the underlying technology itself were made consistent with that style of development? Things like the Smallest Federated Wiki [1] make content closer to the metal, so that there's just a thin layer between presentation and storage (just like there's one in plain text editors).
Also, Wikipedia shows how a mostly-visual environment can be the basis for complex categorization and semi-automatic processes. Most semantic tasks there are handled purely with wiki markup, including the triggering of automatic bot edits against vandalism or for cleanup; and that markup is susceptible to being made visual (as the new VisualEditor shows, although that one is a bit too visual for my tastes).
Sure, building new templates and bots on the MediaWiki platform still requires classic development, and there will always be tasks that are better handled with textual descriptions, but there are still huge possibilities to move general development towards a mixed visual/textual environment, keeping the best of each.
"What you see is what you get". Great! However, it assumes your audience is using the same presentation medium: use Word to lay out an A4 page and you print an A4 page. What if I want it as a webpage as well? Do I have to type it twice now? Does anyone remember how horrible it is to export a webpage from Word? This problem is not solved, and it's only getting harder in the age of screens from 3" up to 70".
One of the goals the VPRI team had in the STEPS project was to put together a WYSIWYG document editor. They did so within their 20,000 total lines of code limit.
Which isn't to say that doing it is easy just because the amount of code needed is small - it's more a statement about the industrialized model of coding that we've chained ourselves to. Nobody can afford the pure-research grade of time and dedication to build and iterate towards a beautiful design that is not just engineered well, but aspires to a mathematical grade of implementation with not a character wasted.
So we build ungainly things to meet our hair-on-fire needs, and copy things out of them when happenstance allows them to be reused. And so in all likelihood we're constantly, invisibly shortchanging the future because we have to focus on shipping our shovelware instead.
And I think the text files end up being part of that, because they're precisely the protocol that makes it so easy for stuff to be copied _inelegantly_. We bend everything around whether it's easy to copy from text and whether the editor we already use can read it.
Doing a WYSIWYG tool which doesn't get in the way of advanced users is extremely hard. Look at 3D content creation tools like Maya: it has been built from the ground up to be programmable and extensible over nearly 20 years, while still being an intuitive "WYSIWYG" tool. Someone completely new to 3D modelling and animation would still be completely lost, but that's not the target audience. (That's where I lost the author: does he want to empower 'noobs' by giving them the illusion that they can accomplish great things without learning, or does he really care about productivity for power users?) In my experience, good tools can have complex UIs as long as the interface is discoverable, and they should be automatable and customizable through programming interfaces. I see myself drifting back to the command line in recent years, just because many things don't require the overhead of a UI.
There's no rule as to how good a WYSIWYG interface can be. Make one that's more productive than the corresponding text interface and we'll use it. Hackers aren't holding back anything.
Are you sure of that? Object orientation was a visual programming environment at first (see Smalltalk), and look at what hackers "from the Unix tradition" have made of it. ;-)
Some problems and their evolution are better understood if you focus on what's wrong instead of what's right. Either you have every requirement covered or you're going to fail.
There ain't no justice here. You may have created some brilliant architecture, object system, whatever. But if --let's say-- it's too slow, people won't use it.
I think WYSIWYG editors are also only useful in certain cases - CMSs, blogs, DTP - the sort of systems where the data entered is the information you will see. There are large categories of applications, however, where the data entered is first aggregated/manipulated before it is useful for any sort of output - in these cases, WYSIWYG is just not the way forward.
This reminds me a lot of Naviedge (http://naviedge.com), something I made for hotel concierges a couple years ago. A huge part of the technical burden was making the itinerary editor WYSIWYG in a cross-browser, mobile-friendly way. Much harder than it looks.
It is not fear, when I design for me I don't do a WYSIWYG unless it has some benefit or trade off for me. Now that benefit can be monetary i.e. I get paid to do WYSIWYG. I'm a selfish git I know... but that is the way it is. I'm not afraid of it, I'm just being me.
WYSIWYG is a good thing in predominantly visual programs (photo editing, vector illustrations, animation tools).
WYSIWYG becomes a problem when the content being entered has semantics without explicit visual cues. By definition you can't have "WYSIWYG" for semantic information, and if you give people a WYSIWYG HTML editor to enter content, they'll just tweak the code until it "looks" right, which will bomb when that content has to be transferred to a new medium (say, non-HTML).
People may like Word, but the majority don't use it semantically; they just mash the font settings until the document looks approximately correct, never mind that the document outline is a mess because half their headlines are not headlines but paragraphs with enlarged text.
You're right, I think the author has a point but putting it all under the umbrella of "WYSIWYG" is not helping it.
For instance, I agree that the current mostly text-based interface to programming could be improved dramatically by abstracting ourselves away from the source files towards a more visual representation of the code, showing dependencies between functions etc... Syntax highlighting doesn't quite cut it. I wouldn't call that "WYSIWYG" at all though.
In essence most source code is a graph of function calls and it's very hard to represent that in plain text. There have been some efforts in that direction (lately I remember the light table editor for instance) but at the end of the day I'm still editing ASCII files in emacs...
What we need isn't WYSIWYG, it's better visual representation of the data we're manipulating.
I notice your response was in the form of a few paragraphs of text, rather than a photo or diagram. Most conversations are "text-based". Our human brains have hardware acceleration for text processing and we're quite good at it. Since programming environments exist to make the best impedance match between programmer and machine, they use text. Language uses incredible composability to encode great complexity.
You'll also notice that while the comments are all plaintext, they're presented hierarchically as nested trees. There's also a hyperlink in your comment that I can follow easily, that's not plaintext.
I'm not saying plaintext is bad, I'm just saying that we're not stuck with 80x25 green-on-black terminals anymore, there's room for improvement using our modern display capabilities.
Why yes, we're not stuck with 80x25 green on black terminals.
Why even mention it? By the way that nested hierarchical tree is also written in text (HTML and CSS). So is the hyperlink. URLs? Text.
Check any modern IDE and you'll see similar visualizations over text-based source code. I can explore my project in a tree-based outline in my IDE. I can open my classes as a UML diagram.
Our human brains have hardware acceleration for image processing as well. Limiting ourselves to text is wasting half our natural resources.
Flowcharts and LabVIEW are old paradigms; they have been and remain problematic for abstract tasks (they're only good for data manipulation). What's needed is a new style of programming that combines the best of visual and textual representations. Wolfram Language [1] and Bret Victor's Inventing on Principle [2] are more modern approaches in that direction.
I think you've identified an interesting problem in the "cultural dialog" around this subject - that legacy terms like "WYSIWYG" and "Visual Programming" are holding back conversation.
The core idea in the article (that "Hacker Culture’s bias is holding back interface design") is a solid one, but a new term would make it easier to identify the problem without all the implications carried by the broken tools of old.
Maybe some new term like "Rich Visual Interaction", or "Web Application IDEs", or anything that ties the idea of development with using modern visual technologies and representations would help.
> By definition you can't have "WYSIWYG" for semantic information, and if you give people a WYSIWYG HTML editor to enter content, they'll just tweak the code until it "looks" right, which will bomb when that content has to be transferred to a new medium (say, non-HTML).
That problem can be mostly fixed with clever programming.
Does the content seem to include a bunch of one-line paragraphs, all of which share roughly the same font that is different from the rest of the content? Treat them as headlines. Have a bit of tolerance for people who can't distinguish 13pt from 14pt, but if the sizes or other styling differ considerably, try to decide which ones are <h1> and which ones are <h2>, etc.
Does the content include a bunch of consecutive paragraphs that begin with numerals (1. 2. 3.) or hyphens/asterisks? Turn them into <ol> and <ul> respectively. C'mon, even Microsoft Word knows how to do this, and the user can override it if the program guesses wrong. There's no reason why an open-source HTML editor can't do the same or better.
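The list heuristic above can be sketched in a few lines. This is a hypothetical illustration working on plain paragraph strings; a real editor would operate on a DOM and handle nesting, but the grouping logic is the same:

```python
import re

def infer_lists(paragraphs):
    """Group consecutive paragraphs that look like list items
    into <ol>/<ul>, leaving everything else as <p> (sketch only)."""
    html, buffer, kind = [], [], None

    def flush():
        nonlocal buffer, kind
        if buffer:
            tag = "ol" if kind == "ordered" else "ul"
            items = "".join(f"<li>{item}</li>" for item in buffer)
            html.append(f"<{tag}>{items}</{tag}>")
            buffer, kind = [], None

    for p in paragraphs:
        # "1. foo" / "2) bar" -> ordered; "- foo" / "* bar" -> unordered
        m = re.match(r"^\s*(?:(\d+)[.)]|[-*])\s+(.*)", p)
        if m:
            this_kind = "ordered" if m.group(1) else "unordered"
            if kind not in (None, this_kind):
                flush()  # list style changed: close the previous list
            kind = this_kind
            buffer.append(m.group(2))
        else:
            flush()
            html.append(f"<p>{p}</p>")
    flush()
    return "".join(html)
```

Running it on `["Intro", "1. one", "2. two", "- a"]` yields an `<ol>` followed by a `<ul>`, with the intro left as a plain paragraph, and an editor could let the user override the guess just as Word does.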
You could even have the best of both worlds: the editor might recognize Markdown syntax and automatically convert them to the corresponding HTML markup for live preview. For example, if you begin and end a word with an underscore, put an <em> around it. (You can already configure Microsoft Word to do this.) If you begin a paragraph with ###, >, or four spaces, turn it into <h3>, <blockquote>, and <pre><code> respectively.
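A minimal version of that Markdown-cue recognition might look like the following sketch (the cues and mappings are the ones named above; a live-preview editor would apply them as you type):

```python
import re

def markdown_hints(paragraph):
    """Map a few Markdown-style cues onto HTML for live preview
    (illustrative only; not a full Markdown parser)."""
    # Inline emphasis: _word_ -> <em>word</em>
    text = re.sub(r"_(\w+)_", r"<em>\1</em>", paragraph)

    if text.startswith("### "):
        return f"<h3>{text[4:]}</h3>"
    if text.startswith("> "):
        return f"<blockquote>{text[2:]}</blockquote>"
    if text.startswith("    "):  # four leading spaces: code
        return f"<pre><code>{text[4:]}</code></pre>"
    return f"<p>{text}</p>"
```

The point is that each cue is a cheap, local pattern match, so the editor can show semantic markup while the user keeps typing plain text.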
Okay, but what if a piece of user-submitted content is so messed up that it's impossible to get any semantic information out of it? Well, think about it this way: whoever produced that content probably didn't intend to convey any semantic information anyway. By failing to determine the semantic structure of that content, the computer is actually guessing the author's intention correctly. Just slap a CSS normalizer on that piece of cow dung and it will look more or less the same in all current and future browsers (which is probably what the author intended). The only thing that matters is that you don't produce such content yourself. If someone else wants to shoot themselves in the balls with Comic Sans, why stop them? It's what they want.
The primary reason why WYSIWYG editors carry a stigma is that the first generation of popular editors, such as TinyMCE and FCKEditor (the precursor to CKEditor), tended to produce horribly broken markup. But that was 10 years ago, and now we have much better editors. And we can make them even better if we want to.