Hacker News
Ask HN: How do you see the future of computing ?
24 points by jacquesm on July 18, 2009 | 43 comments
How do you envision we will be programming computers 10 years from now ?

20 years from now ?



I hate to say it, but 10 years isn't that long a time. Keep in mind the HTML 4 spec was approved about 12 years ago and we're still using it. So I think a lot of the building blocks we'll be using in 10 years are probably already taking shape (HTML5, for example).

That said, I think the improvements we will see in 10 years will be based around the maturation of frameworks. In just about every environment, be it Rails or .NET or whatever, you see programming in general moving to a higher level: people using mature algorithms so they can focus only on what they need to customize, and in doing so making programs that are easier to use while being a lot more stable.

I think that leads to the 20 year question. As programs become more stable and a generation of kids grows up with computers they trust not to crash I think you’ll see computing in general change. Add that to cheap, widespread touch screens and I think you’ll see a lot of things that are currently "physical" become "virtual". You can already see this now with things like EC2 which turns physical computers into virtual constructs that can be manipulated through software.

Think about it. Other than your oven or refrigerator what else do you even have in your house that couldn’t be made virtual? Your entertainment system, your computer, your alarm clock, etc... can all be moved "behind the screen". Even the oven and fridge can be made into components that interface with a virtual console.

That's when the fun really begins for programmers, because that's when we get into the business of creating virtual worlds. I don't know exactly how that will take shape as far as programming is concerned, but I assume you'll start to see hybrid functionality where physical interaction becomes a lot more important than back-end functionality (which, again, will be largely covered by frameworks at that point). In many ways it will probably mirror how physical electronics are created now. Most electronics are built using pre-fab integrated circuits that do most of the work. The manufacturer just creates an appearance and a user interface.

Anyway, that’s one man’s opinion.


10 years from now... too short to tell, but 20 or 30 years from now, I have strange thoughts.

I do believe that one day, Information Technology will invade every place: home, school, office, road... everything that exists in everyday life. I dream that everything in life will be connected, and the Internet will become more important than oxygen, something like cars, TV and cellphones now.

The IT world will be very promising; it'll be big and no one will be able to control it, I think. In my opinion, the IT market will be so big that even giant companies like Google and Microsoft will be nothing more than companies dominating a niche with a considerable market share.

I think in the future IT will grow strong; companies like MSFT will grow their sales but lose market share, as IT's growth is stronger.

I think it's time for many "big" companies formed of 3 or 4 people. Look at startups now; it's just the beginning. How was the web in '95? There were only a few websites; they made good income and dominated the web (like yahoo.com and msn.com). Now they still make money, but the web grew in a way they can't control, and the market is far bigger than their potential.

I think the future of IT will be so complicated that no one will ever be able to fully understand it... many new things every day, startups launched every second... and many news items, ideas and discoveries... It'll be fantastic, but we'll miss the days of the simple Hacker News interface.

Meet you in 2040; I hope this thread still exists...


I had to walk a bit to think this through. My mind works best when walking.

I think touch screens will be cool toys for toddlers, but adults will use something different. Think about it: your fingers block your view just when you need it the most.

I don't know what will happen, but I hope some technology gets the drones out of those cubicles. Something like chorded keyboards and holographic eyewear combined with a wireless network.

If some country were to start teaching the use of chorded keyboards in elementary school, it might be a huge leap for that country's economy.

If you work in a small room, your ideas will be smaller too; if you work while sitting, your mind will be sitting too.

Nowadays programmers work to make software for other people. That will change. In the future people will increasingly write their own software, as programming is already taught in some schools as a basic skill. And programmers will increasingly make software for other programmers, as the percentage of programmers in the workforce constantly rises.

Both of these will shape programming towards "write what you want, and the machine will optimize it for you".

Actually, some of the programming might not involve writing at all. Some people are way better with images than with words, and even today people draw pictures to help themselves grasp what they are doing. Look at well-formed C and at Python: indentation was meant to help, and in Python it's now the way of doing things. The same might happen to other conventions originally just meant to display the information a bit more clearly.


I have no idea about input devices, but this I agree completely with:

> In the future people will increasingly write their own software, as programming is already taught in some schools as a basic skill.

Scripting languages especially have evolved hugely, and I think we're pretty close to the point where "regular" office people will be expected to put together small scripts to get things done. Instead of going through a complicated change-management process to add a button to some in-house app to save some time, users will increasingly be empowered to script such a thing themselves.

Programming is increasingly demystified, and it's really no harder to tie together a couple of objects in, say, Python than to do a proper mail merge in Office.
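As a sketch of that claim: the Python equivalent of a small mail merge really is only a few lines. The template, names and order numbers below are all invented for illustration.

```python
import csv
import io

# A tiny mail-merge-style script: fill a letter template from rows of data.
template = "Dear {name}, your order #{order} has shipped."

# In real life this would be a CSV file on disk; StringIO keeps the sketch self-contained.
data = io.StringIO("name,order\nAda,1001\nGrace,1002\n")

letters = [template.format(**row) for row in csv.DictReader(data)]
for letter in letters:
    print(letter)
```

That's the whole "program": one data source, one template, one loop.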


I couldn't disagree more with this. Scripting is in many ways "glue" that ties two programs together. It's not perfect, and it may not even be necessary if the programs were designed to be extensible in the first place.

As soon as you go beyond "glue," you run straight into this: http://www.xkcd.com/568/ (second panel). No matter how high-level your programming language, it doesn't read minds. When we write programs, we trivially understand that a program is a precise description of what needs to be done. Sure, we have more advanced primitives for doing stuff nowadays, but we still need to say exactly what we want. Remember, if your programming language is ambiguous, that's a flaw. I believe there will always be a clear distinction between programmers and non-programmers, and I don't believe that will ever change. 90% of people want to say "make a cool game" and have it happen, regardless of whether or not that actually works.


I predict that the future of computing (in business, anyway) will have much less to do with technology than with what applications we write, who we write them for, and how we write them.

10 or 20 years from now, 100 million corporate drones will be sitting in cubicles looking at screens that look more like 37Signals or Wufoo than SAP or Oracle. Nimble web apps will steal the enterprise away from the monoliths just as surely as the PC stole it from the mainframe.


Well, I expect that most programming-language research in the next 10 years will focus on exploiting multicore processors. If we're lucky, we'll all be using something like Haskell or Erlang, and all apps will be multithreaded. Languages that choose to ignore multicore will slowly suffocate. Writing multithreaded apps will become much easier than it is now, if not trivial.

It's not wrong to dream, is it? :)


Not at all. I may note that the dream of parallelization (for us systems people trying to implement it, and for the language designers trying to express it) is not easier or even trivial multithreaded authoring, but implicit parallelization. Wouldn't it be unbelievably cool to just write your program and have it parallelized by the language semantics themselves? :) Come, join the dark side and take control of more power than you can possibly imagine! Muahahaha
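Even today, the closest everyday approximation to that dream is a data-parallel map: the call site reads sequentially and the runtime farms the work out. A minimal Python sketch (the workload function is invented for illustration):

```python
from multiprocessing import Pool

def square(n):
    # Stand-in for an expensive, independent computation.
    return n * n

if __name__ == "__main__":
    with Pool() as pool:
        # The call site reads like an ordinary sequential map;
        # the pool distributes the calls across worker processes
        # and returns the results in the original order.
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Not implicit parallelization by any means, but it shows the shape the dream is reaching for: the "parallel" part lives entirely outside the program logic.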


Whatever we're using, there'll be plenty of tongue-in-cheek posts recalling the glory days of 'real programmers' implementing custom Hashtables in pure Java, no less, in order to avoid the synchronisation issues in their company's Enterprise Framework, or whatever.

And they say the magic's gone out of programming!


Perhaps. But who's mourning for the good old days of Cobol today?


Cobol is still in place and more important than ever: governments, banking and the military all rely on it. The problem is that good ol' Cobol programmers are hard to find nowadays.


I can do it but you couldn't pay me enough to do it...


I think that Martin Fowler hits the nail on the head with Illustrative Programming: "When you look at a spreadsheet, the formulae of the spreadsheet are not immediately apparent, instead what you see is the calculated numbers - an illustration of what the program does. [...] Using examples as a first class element of a programming environment crops up in other places - UI designers also have this. Providing a concrete illustration of the program output helps people understand what the program definition does, so they can more easily reason about behavior."[1]
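As a toy sketch of the spreadsheet idea Fowler describes (formulas behind the scenes, computed values on display; the cell names and formulas here are invented):

```python
# Each cell is a formula (a function of the sheet), but the user
# only ever sees the computed values -- the "illustration".
sheet = {
    "A1": lambda s: 3,
    "A2": lambda s: 4,
    "A3": lambda s: s["A1"](s) + s["A2"](s),  # the hidden "program"
}

# Evaluate every cell: this grid of numbers is what the user looks at.
values = {cell: formula(sheet) for cell, formula in sheet.items()}
print(values)  # {'A1': 3, 'A2': 4, 'A3': 7}
```

The point of the illustration is that a user reasons about `7`, not about `A1 + A2`, yet editing a formula immediately re-illustrates the whole sheet.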

Illustrative Programming will fit particularly well with, and benefit from, programs that collect, crunch, and display data, this being an area that I agree will see a lot of growth and attention.[2] "The ability to take data—to be able to understand it, to process it, to extract value from it, to visualize it, to communicate it—that’s going to be a hugely important skill in the next decades."[3]

Tools that are more suitable for large and complex data sets than Excel, and easier to use than R, will become available. These will be used, as many current programming languages are, by people who are not trained in software development. Visualization will become much more important, so programming-tool usability will improve. We'll see the influence of statisticians in our programming tools. Chasing down posts by others with similar problems and bugs will become easier. I'm not sure how much headway Illustrative Programming will make into the more hackerish areas of programming, though I hope it's a fair bit. For the best predictions of what languages will be like, just look critically at the languages that are at the start of their lifespans now (http://mythryl.org/ ? Arc?) and also at current research.

I think, though, that the most interesting changes will come from the sort of programs we'll be writing, the sort of people we'll be working with, and the sort of tools we'll have available, rather than from some feature x or y of some future major language.

[1] http://martinfowler.com/bliki/IllustrativeProgramming.html [2] http://flowingdata.com/2009/06/04/rise-of-the-data-scientist... [3] http://flowingdata.com/2009/02/25/googles-chief-economist-ha...


I don't think there is any indication whatsoever that programming the equivalent of a Turing machine will become any more "illustrative" than it has ever been. 20 years back we had buzzwords like "query by example" and we (almost) had VisualBasic.

What has changed since then? Certainly not much in terms of programming productivity. What has changed is that we program the web now instead of a PC or a Mainframe. The architecture has changed. Esthetics have changed. Collaboration has changed. And we can crunch a lot more numbers now. Programming hasn't changed much.

And I think it won't change in the next 20 years unless some hard AI problems can be solved.


You might be agreeing with me, and perhaps are missing my point: more people, especially statisticians, will be engaged in the programming and programming-like tasks that are required to work with ever-increasing data. Not structured SQL data, but raw and ugly natural-language and legacy-format data. Many programming tasks in the future will be about data, and Excel-like "illustrative" programs are very well adapted to them, even if not Turing-complete (interesting discussions on the Turing completeness/incompleteness of even something like Excel: http://www.c2.com/cgi/wiki?ProductivityRant http://news.ycombinator.net/item?id=429477 ). Illustrative programming is just dynamic programming plus immediate feedback of program results: perhaps the ability to click on a GUI element and change its associated code in real time, perhaps just the ability to change how a data point is calculated. The tools will be different.

Turing-completeness will always look like Turing-completeness, but this is hardly interesting or relevant. I think that your claim about productivity is entirely false (see something like http://www.cs.umass.edu/~yannis/law.html ), but that's beside the point of my post.


I do agree with you that data analysis is of growing importance (it's what I do after all). But I object to inventing a new fancy term for what you describe as "dynamic programming plus immediate feedback". We had that for ages.

"Illustrative programming" is just a classic case of a consultant inventing fancy marketable language. In Excel, the illustrative parts are not the ones that involve programming in the sense of Turing completeness. I think we need to use Turing completeness as the benchmark, or any interaction with computers can be called programming.

And I stand by my claim that programming something like KWIC has not become much more productive in the past 20 years. You could do it in Perl or in Lisp just as easily in 1989 as you can today. Even in C it's not that much less productive to flip and sort a few words than in, say, Java.
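For concreteness, KWIC (keyword-in-context: rotate each title around each significant word, then sort the rotations) really is only a few lines. A sketch in Python, with an invented stop-word list:

```python
def kwic(titles, stop_words=frozenset({"a", "an", "the", "of"})):
    """Build a keyword-in-context index: one rotation per significant word."""
    index = []
    for title in titles:
        words = title.split()
        for i, word in enumerate(words):
            if word.lower() not in stop_words:
                # Rotate the title so the keyword comes first.
                index.append(" ".join(words[i:] + words[:i]))
    return sorted(index, key=str.lower)

for line in kwic(["The Art of Computer Programming"]):
    print(line)
```

A 1989 Perl or Lisp version would be about as long, which is exactly the claim above.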

But productivity is admittedly a complex concept. Of course we have a much greater effect writing software today, but I think that has very little to do with the core techniques of programming. We may be more productive on average, simply because more people make use of techniques that fewer people were using in 1989.


20 years ago we wrote lisp on emacs, and we still do. The only thing that has changed is that the machines used for developing and running our programs have gotten faster. There haven't really been any gamechanging breakthroughs in software in that time. Chances are that there won't be in the next 20 years either.


There has been progress in SE, though; the world is asking for more software and for larger, more complex systems to be written.

The progress in software engineering seems to lie in finding good practices and giving them names; once they have names, one can write papers about them in which their usage in actual projects is analysed.

Example: there was a time when object orientation had not been coined as a term. Now that we have it we can move to the next step and explore variations on it. We can teach it, we can make languages adapted for it.

(I'm not holding up object orientation itself as an example of progress here (or the opposite), but the concept of giving a practice a name and exploring it.)

It's a great advantage if one can talk about 'unit tests' and a newly hired person knows what they are. People were writing what qualified as unit tests before the name existed, but now that we have a name we can write books about it, discuss problems in doing it, etc.

I'm sure bridge building and other engineering efforts went through much of the same development in their infancy. Clever individuals were doing successful things without thinking much about it; they worked with what they had. Others needed to talk about it, and books were written... and now we know arch bridges, suspension bridges.


Great points.

What I was trying to get across in the GP post was that the changes we've seen in software are evolutionary and incremental. I can't really think of any revolutionary breakthroughs in software in the last 20 years.

As you point out that doesn't mean that progress hasn't been made, and we haven't been walking further down the road of enlightenment. It's just that we've been taking one step at a time down a straight road without many surprises.


"20 years ago we wrote lisp on emacs, and we still do."

but we write Java in Eclipse ;-). IDEs have made a lot of progress for the languages most people use in their day-to-day work.


I'd like to counter that with "IDEs are detrimental to programming because they enable bloatware to be produced on a scale that would be almost unthinkable without them".

The number of Java library calls is a nice example: what an unbelievable overkill of library functions.


Ten years from now will not be much different. Essentially, computing needs a breakthrough to move to the next level. What this breakthrough is, nobody knows. A program written in the latest fashionable language of today is essentially not much different from one written in Cobol or Fortran or Ada 30 years ago. We are refining the tools but not discovering any new fundamentals.

I expect such a breakthrough to be achieved via two possible paths:

(a) By brute force: the effect of interconnecting the world's information via the Internet.

(b) Parallel processing, stimulated further by breakthroughs in bio-informatics.


Programs written in, say, Prolog, Haskell or Curry are definitely different from Cobol, Fortran or Ada programs of 30 years ago. And different from the Lisps (and perhaps ML) of that time, too.


Only on the 'surface'. For example, write a program to add a+b! The 'language' may be different, but not the concepts. I am arguing that we need a fundamental breakthrough.


Not gonna happen. Sorry, but this is like saying that we need a breakthrough in "base 3" because "base 2" is too limiting. The fundamental physics of our computing is unlikely to change (I'm sorry, but I don't see quantum computing being practical in the next 50 years, if ever), therefore eventually we'll still be programming opcodes on processor dies. However, if you believe that the underlying architecture of computing hasn't changed in, hell, the past ten years, then I think you've been programming in high-level languages a bit too much.


In the type system:

  {-# LANGUAGE EmptyDataDecls, MultiParamTypeClasses, FunctionalDependencies,
               FlexibleInstances, UndecidableInstances #-}
  -- Peano numerals and their addition, encoded entirely at the type level.
  data Zero
  data Succ a

  class Add a b ab | a b -> ab, a ab -> b
  instance Add Zero b b
  instance (Add a b ab) => Add (Succ a) b (Succ ab)


listen, people be asking me all the time, yo mos, whats gonna happen with hackin. i tell em, you know whats going to happen with hackin? whatevers happenin with us. if we smoked out, hackins gonna be smoked out. if we doin alright, hackins gonna be doin alright. people talk about hackin like it's some giant living on the hillside comin down and visitin' the towns people. we are hackers, we are systems hackers, we are life hackers, we are world hackers -- me, you, everybody. so -- hackins going where we going -- so... the next time that you ask yourself "where is hacking going?" ask yourself, "where am i going? how am i doing?" - and you get a clear idea.

speech is my hammer, bang the world into shape -- now let it fall! my restleness is my nemesis. it's hard to really chill and sit still, committed to projects, i write lines, sometimes won't finish for days. scrutinize my literature, from the large to the miniature. I mathematically add, minister, subtract the wack, selector, wheel it back, i'm feelin that. -- haha! from the core to the memory and back, you know the motto - stay fluid even in staccato.

-- altered lyrics of Mos Def's Hip Hop


Programs will write themselves using genetic algorithms and assorted machine-learning techniques, and programmers will then truly become slaves of the machine. Then again, maybe I've been reading too much science fiction...
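For flavor, the smallest version of that idea one can write: a program "evolving" a string toward a target by mutation and selection. This is a toy (real genetic algorithms use populations and crossover), and the target, alphabet and seed below are all invented for illustration.

```python
import random

def evolve(target="hello", alphabet="abcdefghijklmnopqrstuvwxyz", seed=0):
    """Minimal (1+1) evolutionary search: mutate one character at a time
    and keep the child unless it matches the target in fewer positions."""
    rng = random.Random(seed)

    def fitness(s):
        # Number of positions where the candidate matches the target.
        return sum(a == b for a, b in zip(s, target))

    parent = [rng.choice(alphabet) for _ in target]
    while fitness(parent) < len(target):
        child = list(parent)
        child[rng.randrange(len(target))] = rng.choice(alphabet)
        if fitness(child) >= fitness(parent):
            parent = child
    return "".join(parent)

print(evolve())  # prints "hello": the loop only exits on a full match
```

Of course, here the "fitness function" already knows the answer; the hard part of programs writing themselves is specifying fitness for programs whose answer you don't know.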


I think the biggest change coming to the computing world is iPhone-type devices. These will become our PCs. We'll have handheld apps, but we'll also be able to plug the devices into real monitors and keyboards. So one of the biggest challenges will be making one device work in both kinds of environments. I think the other challenge will be how to create virtual systems within these devices, so that we can use the same device for personal and work purposes, yet keep the two completely separate from each other.


Easy, COLA: http://www.youtube.com/watch?v=cn7kTPbW6QQ

Any syntax you want, running on the same JIT compiler, but designed from scratch without marketing breathing down your neck.


Ian and the guys at VPRI are amazing, but this (COLA) isn't the blue plane switch they're looking for.


It isn't? Why not?


I know what I said is cheeky because all of PLR is going in that direction, and for good reason. COLA and similar research really extol the best from each of the dominant architectures.

But I think that we need to move away from both object and lambda architectures rather than finding ways to marry them. In my humble opinion, the "blue plane shift" is in an entirely different ocean than the ones we've been swimming in.

I'll tell you a little about the research I'm working on: All software is created directly as key-value pairs which can self-assemble into labeled, directed graphs. These are understood directly by an OS which acts essentially as a simple layer over the hardware. No programs are compiled or interpreted. There's no assembly code. There's more to it; I'll email you.
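To make the description above concrete, here is one possible toy reading of "key-value pairs that self-assemble into a labeled, directed graph" in Python. All the names, the pair format, and the traversal are invented; this is only a loose paraphrase of the research being described, not the actual system.

```python
# Toy sketch: "software" as triples that assemble into a labeled digraph.
pairs = [
    ("button", "on_click", "greet"),  # (source node, edge label, target node)
    ("greet", "prints", "hello"),
]

# Self-assembly step: fold the flat pairs into adjacency form.
graph = {}
for src, label, dst in pairs:
    graph.setdefault(src, {})[label] = dst

# In this toy reading, walking the graph takes the place of running
# compiled code: follow edges from a start node until there are none left.
node = "button"
path = [node]
while node in graph:
    label, node = next(iter(graph[node].items()))
    path.append(node)
print(" -> ".join(path))  # button -> greet -> hello
```

Whether "no compilation, no interpretation" can really reduce to graph traversal is exactly the open question of such research; the sketch only shows the data model.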


That is the kind of stuff that I hoped to hear when I posed my question! If you could elaborate on that then I'd be most interested and grateful...


Sure; I'll email you as well.


please do! anfedorov@gmail.com


At some point, surely, all computers will be quantum, and if that's the case it would have an effect.

Edit: maybe it's further away, but still worth thinking about.


I don't mind the downvoting if I'm vastly wrong, but could you please explain why this would be wrong? A few people have upvoted me, so they might be interested in knowing why it's wrong too.


I think 10 years from now we'll be dealing with the post-peak-oil world and will be worried more about how the garden is doing than what programming language we'll be using when the power is on. ;-)


This will be solved with green, renewable energy.


Sure, sure, stick your head in the sand and assume it'll be solved "somehow" with some vague renewable energy.

And who's going to develop this magical new energy source that will replace petroleum, the most storable, efficient, easily accessible energy source the world has ever seen (or ever will see)? (Short of fusion.)

We need facts, not vague hopes.


It's time, it's all about time! Petroleum will last at least 50 years; during that time, countries and companies are doing research... they'll find something.

Look at the new electric cars and motorbikes. Aren't they the future? Maybe!


I have nothing to publicly back this up, but I think in 10 years we'll be in a whole new world when it comes to creating software -- really, a whole new world. It won't be recognizable to those around today. It's exciting, and it's waiting for us.



