As AI and automated systems become ever more complex, it is easy to forget how important it is to preserve points where humans can intervene.
The DAO and Ethereum promise programmatic "smart" contracts that do away with the need for a legal system: you write a contract as computer code, and it essentially enforces itself.
As great as this sounds, the recent hack of The DAO shows that this simply cannot work. Code will always have bugs or behave badly on inputs its authors never anticipated, and humans need to be able to step in when that happens.
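To make this concrete, here is a minimal Python sketch (not The DAO's actual Solidity code) of the reentrancy-style flaw widely reported as the cause of the hack: the contract pays out before updating its ledger, so a malicious recipient can call back in and withdraw the same balance repeatedly. All names here are illustrative assumptions.

```python
class TokenContract:
    """Toy 'smart contract': a ledger whose rules are nothing but this code."""

    def __init__(self):
        self.balances = {}

    def deposit(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount

    def withdraw(self, account, receive_callback):
        # Flaw (analogous to the reported DAO bug): funds are handed over
        # *before* the balance is zeroed, so the recipient's callback can
        # re-enter withdraw() and be paid again from the same balance.
        amount = self.balances.get(account, 0)
        if amount > 0:
            receive_callback(amount)       # attacker regains control here
            self.balances[account] = 0     # too late: already re-entered

contract = TokenContract()
contract.deposit("alice", 100)
contract.deposit("attacker", 10)

stolen = []
def reenter(amount):
    stolen.append(amount)
    if len(stolen) < 3:                    # recursively re-enter withdraw()
        contract.withdraw("attacker", reenter)

contract.withdraw("attacker", reenter)
print(sum(stolen))                         # 30 drained from a 10-token deposit
```

Nothing in this code is "wrong" by its own definition; it does exactly what it says. That is precisely the problem: without a specification outside the code, the drain is indistinguishable from intended behavior.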
And if the code of The DAO is itself the legally binding contract, how could you ever convince a judge that some behavior of the program is a bug rather than the agreed-upon terms? The DAO provides nothing beyond its own code to specify how it is supposed to work.