Why Functional Programming (Haskell) is a necessity

When I think about the future of what is being built here with Cardano, Wolfram Alpha, and SingularityNET, I think this technology is almost unstoppable, and we will eventually have A.I. agents interacting with a global financial system. The concern is not just that something might break, that there might be a bug in the code, or that a problem might arise; it is that if there is any physical possibility for such an error to exist, it will eventually, with near certainty, be exploited by a neural-net A.I. agent. We need to be able to create code that works as it was designed to work, without unexpected errors or consequences. Security and reliability are the most important aspects of this entire project. OpenAI Plays Hide and Seek…and Breaks The Game! 🤖 - YouTube

1 Like

Even though I strongly believe in the advantages of functional programming, it solves only part of the problem. In every example in the video, the AI functioned as expected: it optimised for the task at hand. The error here is in our own minds. We are very bad at making explicit the implicit assumptions surrounding a problem. We solve with abstractions we are not even aware of. For example, one such abstraction assumes that spiders walk on their feet. We had not communicated that limitation to the AI, so it could not act on it. Now for the important question: can we ever uncover all of our implicit assumptions? I think I know the answer to that one!

2 Likes

I might not understand the true potential or limitations of the software, but my hopeful, perhaps optimistic, assumption is that we can use functional programming to contain the A.I. so that it only operates within specific limits (formal specification required). I would hope that such code would not allow the A.I. to jump outside the specified boundaries and influence the outcome through undesired and unexpected actions. (Imagine trading bots tasked with maximizing gains: a small group of A.I. arbitrage bots working together across different exchanges could accumulate all the money, crash the market, and destroy the global financial system because of some arbitrary bug in someone’s smart contract that the bots exploited.) https://www.researchgate.net/publication/321364843_The_DAO_Hacked
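To make this concrete, here is a minimal, hypothetical Haskell sketch of the idea (the module name, the `TradeAmount` type, and the limit are all invented for illustration): a smart constructor is the only way to build a value, and it refuses anything outside the specified bounds, so code elsewhere simply cannot produce an out-of-range one.

```haskell
-- Hypothetical sketch: encoding an operating limit in the type system.
-- Only the smart constructor can create a TradeAmount, and it rejects
-- anything outside the allowed range.
module BoundedTrade (TradeAmount, mkTradeAmount, getAmount) where

newtype TradeAmount = TradeAmount { getAmount :: Integer }
  deriving (Show, Eq)

-- Illustrative limit only; a real system would take this from its
-- formal specification.
maxTrade :: Integer
maxTrade = 1000000

-- Returns Nothing for values outside the boundary, so staying within
-- the limit is enforced by the API rather than by convention.
mkTradeAmount :: Integer -> Maybe TradeAmount
mkTradeAmount n
  | n > 0 && n <= maxTrade = Just (TradeAmount n)
  | otherwise              = Nothing
```

A bot that only ever receives a `TradeAmount` can never be handed a value the specification forbids; the check lives in one place instead of being scattered through the code.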

1 Like

If we program it with specific limits, then the AI will respect them. The problem, however, is deciding what those limits should be. Most often, unexpected behaviour is the result of us not specifying strictly enough what should happen. In assuming we can fix this, we assume we can predict all the holes, errors, inaccuracies, and incomplete abstractions of our own minds. We can’t. An AI is limited by the very limits of its creator. In this problem also lies a partial solution: to reduce the risk of unexpected behaviour, we need to make the AI as simple as possible to create and to reason about. The easier our own reasoning becomes, the lower the risk that we have missed an unexpected loophole.

2 Likes

Isn’t this the point of formal specifications and peer review? Are you suggesting we should not attempt to create AGI, such as what SingularityNET is attempting to build? It’s about reducing the complexity down to a collection of many simple mathematical equations.
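As a small, hedged illustration of what “specification as simple equations” can look like in practice, here is a hypothetical Haskell snippet (the `applyFee` function and its properties are invented for this example) that states two invariants as plain equations and checks them mechanically with QuickCheck:

```haskell
import Test.QuickCheck

-- Hypothetical pure settlement function: deduct a fee from a balance,
-- never going below zero.
applyFee :: Integer -> Integer -> Integer
applyFee fee balance = max 0 (balance - fee)

-- The specification, written as simple equations over the function:
-- the result is never negative, and a zero fee changes nothing.
prop_nonNegative :: Integer -> Integer -> Bool
prop_nonNegative fee balance = applyFee fee balance >= 0

prop_zeroFeeIdentity :: NonNegative Integer -> Bool
prop_zeroFeeIdentity (NonNegative balance) = applyFee 0 balance == balance

main :: IO ()
main = do
  quickCheck prop_nonNegative
  quickCheck prop_zeroFeeIdentity
```

Property-based testing is of course weaker than full formal verification, but it reflects the same mindset: the expected behaviour is written down as equations rather than left implicit.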

We should always put in the necessary safeguards. It is unavoidable that people will attempt to create AGIs. I am only warning that we should be realistic about what can be accomplished.

Let me ask you this: do you know exactly which parameters to tweak in order to manipulate an actual human being into doing your bidding? Can you predict human behaviour even in someone you’re as intimate with as your partner?

Without being able to choose the necessary boundaries for an AI, how can we know it to be safe? And how do we predict those boundaries when we’ve shown we can’t do it with real people? Aren’t AGIs meant to mimic human capacity?

We don’t know what we don’t know. To fill in the gaps is a worthwhile endeavour as long as we consider the risks along the way and take precautions.

Peer review cannot fix this completely. Peer review can validate assumptions, statements, and hypotheses, but it cannot fully capture the incompleteness of our information.

I think AGIs should not be allowed yet, and should be subject to safety testing and regulation reminiscent of the pharmaceutical industry.

1 Like

I think we are in agreement here…

And the point of my post is that functional programming, peer-reviewed science, formal specification, and formal methods are probably the best “safeguards” we have. I agree there is a threat and danger inherent in AGI (or the entire crypto industry) not working out as planned. I’ve been looking for a better way to do things than what IOG brings to the table, but I cannot find a better systematic method.

And I agree I should not be so naive as to think that this alone will be enough to ensure it all goes as planned. The future is unpredictable, but the best way to predict it is to build it.

My point is that even the best safeguards are not enough to let complicated AIs make autonomous decisions. We should explore the limits of our own thinking gradually, and prefer AIs that inform us over AIs that decide for us.

That in itself does not motivate going forward with an imperfect method. How can we be sure it is safe?

This quote is misrepresented in this context. The quote talks about how we can predict the future by building something and, in that act, choose what it will look like. This approach assumes that we know what we want, but that is exactly the problem here. If we leave an AI too much autonomy, we are not predicting and choosing a future; we are generating one with a degree of randomness in it. That’s not predicting, that’s gambling!

We should more or less use a predictable approach and move slowly enough that we can estimate, evaluate, and correct as we move forward. Maybe this is what you meant and I read too much into your words?

2 Likes

I think I agree with everything you are saying, and a slow and steady approach is the safe and logical thing to do (but I also want all the cool technology now!). Unfortunately, we cannot control this like a single lab experiment and just slow it down, because it is happening simultaneously in labs, universities, and backyard garages across many competing nation-states around the world. AI may be an existential threat, more powerful than nuclear weapons. But how do we control it? The more one country regulates, stalls, and attempts to slow down its development within that country’s jurisdiction, the further ahead the competition will get (just like the crypto economy in general, genetic engineering, or any advanced technology). Can we trust this futuristic, hypothetical technology to nation-state governments or to billionaire tech CEOs? I don’t have the answers, but I’m excited to see how this unfolds. Let’s just hope Elon Musk, Ben Goertzel, and Ray Kurzweil don’t become enemies in an AI arms race…
(Personally, I’m cautiously optimistic: I think open-source, decentralized solutions will bring great benefits, and the threats and fears are overrated and often exaggerated.)

1 Like

Entertaining video about the flash crash… The Wild $50M Ride of the Flash Crash Trader - YouTube
An example of why we should try to build things that work the way we expect them to.