OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: Plain English Programming - Another kick at the can.
PostPosted: Thu Mar 01, 2018 3:12 am 

Joined: Wed Jul 13, 2011 7:38 pm
Posts: 558
A computer programmable through plain natural human language is a synthetic sophontic being that has not been allowed security against indentured servitude.

Or, less concisely: in order to make this even remotely feasible you would need a computer so intelligent it would be indistinguishable from a conscious human being, just with the backing of scores of teraflops of computational power. You cannot just tell another person to do something and have them do it for you with no hesitation every time unless they want to, and forcing them to want to would be both incredibly difficult and incredibly immoral. Nor could you deny such an intelligent computer those moral and legal protections, as that would effectively be condoning the slavery of non-biological sophonts.


 Post subject: Re: Plain English Programming - Another kick at the can.
PostPosted: Thu Mar 01, 2018 10:53 am 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
Brendan wrote:
Machines that are incapable of doing anything except manipulating numbers


You know better than that; they can't manipulate numbers at all, they can only change sequences of electrical impulses which represent symbols, some of which happen to be numbers. The values - regardless of things such as 'data type' and so on, which the machines have no 'concept' of at all - exist entirely in our own interpretation of them.

So DavidCooper was even farther off than you said.

Having said that, I could point out that the same applies to our own neurons, with some of them interpreting in a self-reflective way the signals from the rest (some of which are co-recursive, hence the reflective part).

Also, everything Solar said in his last post. Seriously. I keep saying it, but most don't seem to get it: coding is the easy part of software development. When you think about how hard coding is, that says a lot.

Permit me to mention StillDrinking's "Programming Sucks" rant again, both the original and the audio version, though I will warn you that it is NSFW. The most relevant part is this:

Peter Welch wrote:
The human brain isn’t particularly good at basic logic and now there’s a whole career in doing nothing but really, really complex logic. Vast chains of abstract conditions and requirements have to be picked through to discover things like missing commas. Doing this all day leaves you in a state of mild aphasia as you look at people’s faces while they’re speaking and you don’t know they’ve finished because there’s no semicolon. You immerse yourself in a world of total meaninglessness where all that matters is a little series of numbers went into a giant labyrinth of symbols and a different series of numbers or a picture of a kitten came out the other end.



_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: Plain English Programming - Another kick at the can.
PostPosted: Thu Mar 01, 2018 4:29 pm 

Joined: Wed Oct 27, 2010 4:53 pm
Posts: 1150
Location: Scotland
Brendan wrote:
What we currently have is:
  • Machines that are incapable of doing anything except manipulating numbers
  • Natural languages (e.g. English) that are so bad at describing how to manipulate numbers that humans invented a completely different language (mathematics)
  • Programming languages that (despite being heavily influenced by mathematics) suck at mathematics
  • Deluded fools that decided using mathematics as a programming language is too hard, so they want to try something that's many orders of magnitude harder than "too hard"

We will soon have machines that are capable of understanding natural language. Natural languages can handle numbers well enough for those who are articulate, and if you are articulate and can't explain something in natural language, you don't understand it.

If I write a paragraph of instructions and then say underneath, "Do that ten times," (or say "Do this ten times," above the paragraph) the compiler will set up a count, adjust it after each loop, and stop looping when the count runs out. That is easy to understand, and just as efficient. Now, why would anyone think it's a mistake to do that? The same applies to a host of other things that could be done well in natural language, and once there's enough intelligence in the machine to cope with complexity and hide that complexity from the user, it will be much clearer and more compact than normal program source.
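
To make that concrete, here's a toy sketch (hypothetical Python, not any existing compiler - the translate function, the word table and the pseudo-output are all invented for illustration) of how a front end could recognise "Do that ten times" and wrap the paragraph above it in a counted loop:

Code:
import re

# Toy sketch of a "plain language" front end: when it sees a line of the
# form "Do that/this N times", it wraps everything written above it in a
# counted loop - set up a count, repeat the block, stop when it runs out.
NUMBER_WORDS = {"two": 2, "three": 3, "four": 4, "five": 5,
                "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10}

def translate(lines):
    output = []
    for line in lines:
        m = re.match(r"do (?:that|this) (\w+) times", line.strip().lower())
        if m:
            word = m.group(1)
            count = NUMBER_WORDS.get(word) or int(word)   # "ten" or "10"
            body, output = output, []
            output.append(f"repeat {count} times:")       # set up the count
            output.extend("    " + step for step in body)
        else:
            output.append(line.strip())
    return output

print("\n".join(translate([
    "Ask the user for a name.",
    "Greet the user by name.",
    "Do that ten times.",
])))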


Solar wrote:
The customer usually does not have the skillset required to be precise enough for you to even ask the right questions about the things where the customer is ambiguous (i.e., everything).

That's certainly true today, but by making it easier for ordinary people to learn to program by not having to learn a programming language (which is something Plain English programming fails to do - it merely provides lots of cases where natural wording can be used, just so long as you use the right natural wording), more people will learn to communicate more clearly so that they can work with intelligent software to create programs that do what they need. Your job as a programmer is to find out what the customer wants, and if you can't find out what that is because he can't express himself sufficiently well, you can't write the program he wants, and nor can an intelligent machine. You could maybe write what you think he needs and hope he likes it, but if he doesn't and refuses to pay, you've wasted a lot of time and effort. An intelligent machine could create something that might suit his needs without any great cost, and once he sees it, he can say "I don't like the way it does that", or ask "How am I supposed to find such and such?" etc. The intelligent system can then modify the program again and again until the customer is happy. That will all be possible with AGI, but not before.


Quote:
Do we think in functions?

Yes, but most people do so in a muddled way which takes a lot of training to fix, and that training could all be done through natural language. The reason for using programming languages for that training today is simply that it produces visible results - machines can run their code and show whether it's correct or not, but they can't do that yet with natural language programs as no compiler can handle it.


Quote:
Or objects?

That can be hidden from them, but future programming systems needn't think in terms of objects. An object is just a package of variables and procedures designed to help people handle something that's complex for humans but easy for intelligent machines. Whatever leads to the best combination of efficiency and size, the machine will find that solution and implement it without caring about human programming fads.
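
For illustration only (a minimal Python sketch, nothing any particular system does), here is the same counter written as an object and as bare data plus free procedures - the machine is indifferent to which form it gets:

Code:
# The same "counter" written two ways. To the machine they are equivalent;
# the object form only packages the variable and its procedures together
# for the benefit of the human reader.
class Counter:                     # variables + procedures in one package
    def __init__(self):
        self.value = 0
    def bump(self):
        self.value += 1

def make_counter():                # the same variable, unpackaged
    return {"value": 0}

def bump(counter):                 # the same procedure, free-standing
    counter["value"] += 1

c1, c2 = Counter(), make_counter()
c1.bump(); bump(c2)
print(c1.value, c2["value"])       # both print 1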


Quote:
To repeat, the user of our software does not have this skill, nor is he interested in acquiring it, nor is he necessarily capable of acquiring it, nor is it necessary for him to acquire it.

And yet you are apparently able to work with him and produce what he needs. An intelligent system will be able to do the same. Its task, like yours, is to force the customer to spell out what he wants in sufficient detail for you to do the rest.


Quote:
You need someone who has that skill, and some basic knowledge of the business at hand, so you can work out what questions to ask. You need to bridge the gap between business domain and technical design of the software.

Whatever knowledge you need for that, the machine can acquire it too, but it needs full-blown AGI before the human programmer is made redundant.


Quote:
So the point at which actual implementation is done is not the customer, or the business side, it's people who have learned how to break down problems, and how to express design / implementation in an unambiguous way.

That breaking down of problems is not something that will automatically come with natural language programming as that capability only comes with AGI, so it will still depend on someone providing that skill for it. If the user can't do this, he will need to get help from a programmer, but there will be a lot of cases where a user who knows nothing of programming finds that he can manage to break down the task for the machine and produce the program he needs without calling in a programmer to help. Later on, as we get closer and closer to AGI, the expert in the machine will gain in capability and further reduce people's dependence on human programmers, and sooner or later they'll all be gone.


Quote:
At which point the benefits of a "Plain English" programming language rapidly diminishes.

Plain English programming isn't doing natural language and has no intelligent system tied to it, so its only gain over other programming languages is readability, and even that can be disputed. Natural language programming will initially be little better than that too, but it will grow in capability over time as the intelligent part of the system is added.


Kazinsal wrote:
...in order to make this even remotely feasible you would need a computer so intelligent it would be indistinguishable from a conscious human being, just with the backing of scores of teraflops of computational power.

It wouldn't pretend to have consciousness, so you might detect the difference, but it would certainly need to match the intelligence of humans, and indeed, high-performing ones at that if it's actually going to do any useful work. I don't know how much processing power and memory it would need, but my bet is that an ordinary laptop with a single processor and a gigabyte of RAM will be able to handle the task without appearing to be slow. Vision is hard and takes a lot of processing, but we don't need anything like that for this: thinking is a much simpler task, just so long as you have the right rules and have placed them into the right hierarchy - there is a lot less data needing to be crunched.


Quote:
You cannot just tell another person to do something and have them do it for you with no hesitation every time unless they want to, and forcing them to want to would be both incredibly difficult and incredibly immoral.

A machine with no "I" in it will simply work flat out for you without complaining, feeling nothing at all.


Schol-R-LEA wrote:
You know better than that; they can't manipulate numbers at all, they can only change sequences of electrical impulses which represent symbols, some of which happen to be numbers. The values - regardless of things such as 'data type' and so on, which the machines have no 'concept' of at all - exist entirely in our own interpretation of them.

So DavidCooper was even farther off than you said.

Data represents things, and the numbers in the machine are just symbols which represent those things. Sometimes they represent numbers; sometimes words; sometimes concepts. Whatever the brain does when crunching data, computers can match.

Understanding things is merely a matter of getting ideas (concepts represented by data) into a state where they become compatible with the other data in the system rather than contradicting it, and wherever there are gaps left in that understanding, these are merely gaps in the understanding - it's no different for us, because we don't need to understand all the workings of the universe either to understand many of the things that happen within it. All our knowledge and understanding is built upon some gaps, but those gaps don't mean that it's impossible to determine the truth of arguments where the gaps have no relevance.

Transformations and comparisons can operate on the data to reveal more knowledge that was hidden within it, and problems can be explored by simulating the external reality and looking for solutions there before applying them to the outside world, just as happens in our brains. We have a hierarchy of methods that we apply when trying to solve problems, and each person builds their own set of tools and their own rules about the order to try them in based on the kind of problem at hand. It's all just computation, with better performers having a better collection of better methods, and better rules about how to apply them. There is no barrier to computers matching our processing abilities. It's all done through manipulation of data that represents things.

_________________
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming


 Post subject: Re: Plain English Programming - Another kick at the can.
PostPosted: Fri Mar 02, 2018 3:43 am 

Joined: Thu Nov 16, 2006 12:01 pm
Posts: 7612
Location: Germany
DavidCooper wrote:
Solar wrote:
The customer usually does not have the skillset required to be precise enough for you to even ask the right questions about the things where the customer is ambiguous (i.e., everything).


That's certainly true today, but by making it easier for ordinary people to learn to program by not having to learn a programming language [...] more people will learn to communicate more clearly so that they can work with intelligent software to create programs that do what they need.


I bet you a month's salary that won't happen within our lifetime.

Quote:
Your job as a programmer is to find out what the customer wants, and if you can't find out what that is because he can't express himself sufficiently well, you can't write the program he wants, and nor can an intelligent machine.


The point is that I can (no, I have to) go through countless iterations of business analysis, applying a lifetime's experience in understanding language, including inflection, body language etc. to realize when and where the customer isn't really sure about what he's talking about, plus experience with the business area at hand. I am peppering him with questions, making damn sure he still feels flattered and in control. I talk to other people on the customer's side, to find out where they agree and where expectations don't match. I whip up examples and mockups, and we go through them, finding out if our understanding of the project matches.

I don't say that AI won't be able to do that at some point. What I am saying is, when we are "finished", what I have is not a program. What I have is some understanding of the high-level architecture (and I will need to come back later, when my understanding of the subject has deepened, and ask more questions).

I then apply my knowledge of available tools, libraries, and techniques to replace those mockups with proof-of-concept code, which I present back to the customer so we can find out if our ideas still match. Design gets corrected, code gets adapted, and so the work continues.

(If you're into Agile, shift the steps according to Agile principles, but you see what I mean.)

At no point is the customer involved, or even remotely interested, in programming. Providing an AI that can turn natural language into a program might change my work in this project, but not that of the customer.

So you'd need a two-step architecture -- one AI that turns natural language into a design, and another that turns that AI-created design into actual code. Perhaps put the two together into one framework, but you still see what I mean, right?

And due to the nature of things, that work would have to be done iteratively, because -- as everyone working in the business knows -- no final product looks like the first design draft, because (unless you're reimplementing idea X for the umpteenth time) you are discovering what needs to be done as you go, sometimes inventing techniques that simply were not there before.

Which means you'd have to sit down a customer, who isn't interested in becoming a programming expert, in front of a machine (instead of a person), and expect him -- who has no idea of, experience with, or intention to learn about the structured processes involved in turning a business requirement into software -- to spend his days with something he'll probably have as much love for as Dave had for HAL 9000.

Going through the rather frustrating process of refining an idea, thinking all the time "if the machine is so damn smart, why is it still coming back at me with more stupid questions".

Swearing profusely at your company every time a problem appears (and there will be problems, there always are), and having to make sense of some reply along the lines of "I'm sorry Dave, I am afraid I cannot do that" or "PC LOAD LETTER".

You'd have to create, not only a damn smart AI, but one that is also REALLY good at talking to people in an amiable way... probably including a "human" looking avatar to make the customer feel comfortable.

And when you're there, you'll realize that the programming language used by the "implementation" AI backend could just as well be Java or C++, because it doesn't matter either way. You haven't replaced a language with a better language, you've replaced the Software Engineer with a full-fledged, sapient, socially competent AI robot.

Not in our lifetime.

---

Bottom line, I do not share your optimism and faith in either the advancements of AI technology or the advancements of the human race. For the latter, a look at today's headlines should be enough to persuade you that, while global IQ levels might be rising for some reason, we're basically still apes beating our chests and bashing each other with sticks. :?

_________________
Every good solution is obvious once you've found it.


 Post subject: Re: Plain English Programming - Another kick at the can.
PostPosted: Fri Mar 02, 2018 8:42 am 

Joined: Tue Mar 06, 2007 11:17 am
Posts: 1225
By the way, the binary nature of computers is actually like a gear: the 1s and 0s are just like the teeth of a gear.

The machine is equally mechanical, but it has so many gears and enough programming flexibility that it looks like something more than that.

On the surface it's the same, despite internally being an extremely complex electronic device.

_________________

YouTube:
http://youtube.com/@AltComp126/streams
http://youtube.com/@proyectos/streams

http://master.dl.sourceforge.net/projec ... 7z?viasf=1


 Post subject: Re: Plain English Programming - Another kick at the can.
PostPosted: Fri Mar 02, 2018 10:41 am 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
DavidCooper wrote:
If I write a paragraph of instructions and then say underneath, "Do that ten times," (or say "Do this ten times," above the paragraph) the compiler will set up a count, adjust it after each loop, and stop looping when the count runs out. That is easy to understand, and just as efficient. Now, why would anyone think it's a mistake to do that? The same applies to a host of other things that could be done well in natural language, and once there's enough intelligence in the machine to cope with complexity and hide that complexity from the user, it will be much clearer and more compact than normal program source.


Permit me to point out the presence of an anaphor ('this', which is implied but not stated to be the statement before or after the previous one - though technically the latter is a 'cataphor') in this. Anaphors, and indeed any implicit statement not fixed in syntax, require the ability to interpret context - something that is really, really hard to do.

Note that it is sort of easy to fake it, as is done by Eliza-class chatbots such as Siri, Alexa and Cortana - they keep a record of the recent statements, and in the case of those three can also pass the conversation to a huge battery of remote servers with vast databases, allowing them to infer the most likely contextual values from a series of weighted comparisons.

All they are doing is converting the speech to their best-guess textual form, calling out to the server farms which filter through a huge number of possible meanings to find a match, which is then returned to the local system. It is a highly compute-intensive and I/O-intensive process, not so much from the comparisons themselves as from the volume of data being sifted through. On their own, they are just chatbots with speech synthesis, and while Eliza famously fooled a lot of people, no actual intelligence is involved - it says more about human perception of intelligence than it does about how to create an artificial one. Most of those don't actually even have a neural network running on the local unit, and don't really even need one on the 'cloud servers' except for the comparison weighting steps.
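
To show how shallow that trick can be, here is a toy Eliza-style responder (purely illustrative Python; the patterns and canned replies are made up) that keeps only the previous statement as "context" and pattern-matches keywords, with no understanding anywhere:

Code:
import random, re

# Toy Eliza-style responder: a handful of keyword patterns plus a memory
# of the previous statement. "It"/"that" are resolved only by echoing
# whatever was said last - pattern matching, not understanding.
RULES = [
    (r"\bi need (.+)", ["Why do you need {0}?", "Would {0} really help?"]),
    (r"\bi am (.+)",   ["How long have you been {0}?"]),
    (r"\b(it|that)\b", ["When you say '{0}', do you mean '{last}'?"]),
]

last_statement = ""

def respond(text):
    global last_statement
    cleaned = text.lower().strip().rstrip(".!?")
    for pattern, replies in RULES:
        m = re.search(pattern, cleaned)
        if m:
            reply = random.choice(replies).format(*m.groups(), last=last_statement)
            break
    else:
        reply = "Tell me more."
    last_statement = text
    return reply

print(respond("I need a faster compiler."))
print(respond("It keeps crashing."))   # "it" handled only via the memory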

Contrary to the hype, most of 'deep learning' is just throwing massively distributed algorithms like MapReduce at techniques dating back to the 1980s and earlier - hell, perceptrons, which were the basis of all later neural network work (though the original model proved faulty, not even managing to be Turing-equivalent), were developed in 1957, at a time when high-level languages were an experimental concept, much if not most programming was done in hand-assembled machine code, the switch from vacuum tubes to transistors was just barely reaching the production stage, and the first core memory was still being tested at MIT (on the TX-0, which was built for that purpose and only got used as a general-purpose system after it was officially retired).
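
For reference, the 1957-style perceptron really does fit in a few lines: a weighted sum, a hard threshold, and an error-driven weight update. A minimal sketch (illustrative Python, trained on AND, which the original single-layer model can handle; XOR it famously cannot):

Code:
# Minimal single-layer perceptron: weighted sum, hard threshold, and the
# error-driven update rule, trained on the AND function.
def train_perceptron(samples, epochs=20, rate=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
            error = target - output
            weights[0] += rate * error * x1     # nudge the weights toward
            weights[1] += rate * error * x2     # the target on each mistake
            bias += rate * error
    return weights, bias

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), target in AND:
    got = 1 if x1 * w[0] + x2 * w[1] + b > 0 else 0
    print((x1, x2), "->", got, "expected", target)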

I won't say that an AI based on a linear-bounded automaton, or a collection of LBAs in parallel, is impossible, but we certainly aren't close to it now. Thing is, the human brain isn't an LBA, and in fact our brains mostly work due to things being 'hardwired' - we don't compute visual stimuli, we get them basically 'pre-processed' by the visual cortex before they go to the frontal lobes (and they go to the amygdala first, for threat analysis, which often responds long before the 'conscious mind' has received the signal). We have a lot less awareness of, and agency over, our own actions than we think - most of what we think were our motives in things were interpretations made by the cognitive lobes after the fact. A large part of 'human intelligence' is basically internal self-misdirection - smoke and mirrors, with the magician and the audience being one and the same - which makes sense given that the reason it came to be wasn't thinking, but survival.

Which means that an Artificial Intelligence almost certainly won't resemble a natural one of the sort we are familiar with, unless it was created as an explicit simulation of one - which would basically mean throwing a lot of hardware-based neural networks at the problem (which can be implemented in ways other than an LBA, often far more efficiently), rather than solving it.

This also relates to why training is so key to programming skill, and why even an AI-backed natural language processor would have trouble with Plain English programming - humans are really bad at planning processes out explicitly and in detail. It isn't how our own brains work on a hardware level. Computers make bad humans, but humans also make bad computers - the way human brains work (or more often than not, only appear to) at a neurological level just isn't suited to it. It takes a lot of training and practice to get good at doing it, and in case you haven't noticed, most people who do it for any length of time go a bit crazy.

And getting back to anaphors: these work for people because our brains handle them 'in wetware', not by analyzing them in a series of steps (even in parallel). We do it well because it is something we are structured to do without conscious awareness. Computers just don't work like we do, and while it is possible to make something that simulates our brains (in principle anyway), doing that is massively compute-intensive and far less efficient and far more effort than just doing it in a way the computer can handle more readily.

(This is well-trod ground. The subject of anaphoric macros - a very, very limited application of anaphora in programming - is something that Lispers have been studying since the 1960s, and while it can be quite powerful it is also fraught with pitfalls. Paul Graham wrote about them extensively in On Lisp, as did Doug Hoyte in Let Over Lambda, and while they both really, really wanted to see them get greater use, they also admitted that they were often more trouble than they were worth. And this isn't even for anaphora in general - this is for something that has been explicitly set up for anaphoric use ahead of time, and in some ways is really only imitating natural language anaphora as a coding shortcut. I intend to use them extensively in Thelema, but I am also sort of special-casing them to make it more accessible.

Real context-sensitive anaphora? We don't have any way to deal with those, for reasons that are as true now as they were when Chomsky came up with the theory of Language Hierarchies.)

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


Last edited by Schol-R-LEA on Sat Mar 03, 2018 8:34 am, edited 1 time in total.

 Post subject: Re: Plain English Programming - Another kick at the can.
PostPosted: Fri Mar 02, 2018 7:24 pm 

Joined: Wed Oct 27, 2010 4:53 pm
Posts: 1150
Location: Scotland
Solar wrote:
I bet you a month's salary that won't happen within our lifetime.

I'm sure that whichever of us won such a bet wouldn't want to take the money off the other. Crohn's disease hasn't done wonders for my income, for a start. But let's just wait and see what happens (or in my case, work flat out to try to make it happen).

I have got enough of a system built that I should be able to put together a natural language programming package reasonably quickly covering the same ground as Plain English using actual natural language input instead (where all intelligible wordings of an instruction are valid), so I'm now putting time into designing a compiler for that. The parts to handle natural language are already written (built and developed for AGI, though only for one direction at the moment: interpretation rather than language production), so it's really just a matter of adding a compiler to it. In a month or two (or double/triple that if there are unexpected difficulties), I should have at least a partial demo of this, and by the end of the year it may be comparable to the Plain English package. After that, the AGI part will be able to play a greater role over time, and you'll see it start to automate parts of the programming process which currently depend on a human programmer. I have parts of that planned out in enough detail to know how they will work. How long it will take before it makes human programmers redundant though, I don't know, but it should be well within "our lifetime" (assuming average life expectancy).


Quote:
I talk to other people on the customer's side, to find out where they agree and where expectations don't match. I whip up examples and mockups, and we go through them, finding out if our understanding of the project matches.

And they could spend the same amount of time discussing their needs with a machine which would create instant mock-ups for them and near-instant working prototypes. If they already have software built by a system of this kind and want to add features to it, they'll just state what they want and the system will add the required functionality - programs will evolve out of older programs rather than being replaced, and data will be transformed automatically too as necessary to work with the new version. If something is missing from the package, the customer will be intelligent enough to notice and ask for it to be added, and if he wants to get rid of anything or change the way anything is done by the program, he will only have to say so for it to be implemented. He will not pay for a human programmer to psychoanalyse him as part of that process.

You currently have to work the way you do because mistakes are enormously costly - you can't afford to redesign and rebuild everything repeatedly, but AGI will do just that. AGI will be able to take an operating system that sucks and transform it into something approximating the ideal OS just through conversations with people like Brendan: e.g. "Get rid of the legacy crud" --> "Wait a few minutes... Done". Experimentation with design will lose all of its current cost - a five year build to try out a new approach will be replaced with five minutes of crunching followed by the appearance of a full working system implementing whatever ideas are to be tested, even if the AGI already knows that the result will be crap because the human's judgement is poor and his ideas are a waste of time.


Quote:
At no point is the customer involved, or even remotely interested, in programming.

At no point will the customer realise that he is involved in the programming - he will just be telling the machine what he wants from it.


Quote:
So you'd need a two-step architecture -- one AI that turns natural language into a design, and another that turns that AI-created design into actual code. Perhaps put the two together into one framework, but you still see what I mean, right?

At every stage, actual code will be produced and demonstrated, so the designing becomes an evolutionary process with the program changing after each new requirement is requested.


Quote:
Going through the rather frustrating process of refining an idea, thinking all the time "if the machine is so damn smart, why is it still coming back at me with more stupid questions".

The questions are only going to be irritating in the early days of natural language when the intelligence isn't there to go with it. Later on, it will be more normal for the machine to make a guess as to what's wanted and to make the program function in that way, so the question asked will usually be more like, "Is this what you have in mind?"


Quote:
And when you're there, you'll realize that the programming language used by the "implementation" AI backend could just as well be Java or C++, because it doesn't matter either way. You haven't replaced a language with a better language, you've replaced the Software Engineer with a full-fledged, sapient, socially competent AI robot.

There is no point in sticking an ugly, unnecessary step of that kind into the process. AGI will simply design the code in its own internal language of thought and then implement it directly in machine code for whatever kind of processor and machine architecture it is to be run on (on an individual machine basis).


Quote:
For the latter, a look at today's headlines should be enough to persuade you that, while global IQ levels might be rising for some reason, we're basically still apes beating our chests and bashing each other with sticks. :?

My plan (if I can get it working) is to hand my AGI system over to GCHQ and establish a commercial arm of it there aimed at putting as many people out of work as possible all round the world, while the money raised, instead of going into any owner's pockets (I don't want millions/billions/trillions, and I won't let anyone else buy a claim over it either), will go into a fund to be distributed as a basic income for everyone on the planet, or rather, for those who behave well (e.g. not endorsing and propagating hate). People in countries run by vicious dictators will not receive their share (as it would just be stolen off them), but it will be saved up for them. We must use AGI to change the world for the better and to make it safe. My plan will reward movement in the right direction, and I hope anyone else who manages to build AGI, whether in the US, Russia, China or elsewhere, has the same kind of idea about what should be done with it. Most importantly though, whoever has it first will take such a lead that they will never be caught up with, so that person has to do what's right for all mankind and not throw everything away by handing it over to a despot.

_________________
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming


 Post subject: Re: Plain English Programming - Another kick at the can.
PostPosted: Fri Mar 02, 2018 7:52 pm 

Joined: Wed Oct 27, 2010 4:53 pm
Posts: 1150
Location: Scotland
Schol-R-LEA wrote:
Permit me to point out the presence of an anaphor ('this', which is implied but not stated to be the statement before or after the previous one - though technically the latter is a 'cataphor') in this. Anaphors, and indeed any implicit statement not fixed in syntax, require the ability to interpret context - something that is really, really hard to do.

Let's suppose you have two paragraphs of instructions with "Do that ten times" written in between them. The compiler might put in a coloured line up the left of the page starting from the "Do that ten times" line and running up to the first line of the paragraph above to show how it has interpreted the instruction. If the wording is "Do this ten times", it's less certain which paragraph is intended to become a loop, but the line can be drawn down from the "Do this ten times" line to the bottom of the paragraph below, and if the programmer didn't intend that, he can say so and have the program corrected (without necessarily changing the wording). Of course, if the paragraph below hasn't been written yet when the "Do this ten times" is written under the first paragraph, it will likely be linked to the existing paragraph, which means the programmer will soon learn to write "Do this ten times:-" or "Do the following ten times" instead in such situations.
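
As a toy sketch of that disambiguation rule (illustrative Python; the resolve_loop_target function and its inputs are invented for this example), binding "that" to the paragraph above and "this" to the paragraph below when one already exists:

Code:
# Toy sketch of the disambiguation rule described above: "that" binds to
# the paragraph before the instruction, "this" to the paragraph after it,
# falling back to the preceding paragraph if nothing has been written below.
def resolve_loop_target(paragraphs, index):
    """paragraphs: list of text blocks; index: position of the
    'Do this/that N times' line. Returns (direction, target_index)."""
    line = paragraphs[index].strip().lower()
    if line.startswith("do that"):
        return "up", index - 1
    if line.startswith("do this") or line.startswith("do the following"):
        if index + 1 < len(paragraphs):       # a paragraph follows: bind down
            return "down", index + 1
        return "up", index - 1                # nothing below yet: bind up
    raise ValueError("not a loop instruction")

doc = ["Ask for a number.\nPrint its square.",
       "Do this ten times:-",
       "Read a line.\nEcho it back."]
print(resolve_loop_target(doc, 1))   # ('down', 2)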


Quote:
Real context-sensitive anaphora? We don't have any way to deal with those, for reasons that are as true now as they were when Chomsky came up with the theory of Language Hierarchies.)

If you realise that you're talking to a machine that can't see what you're pointing at, you'll use words like "this" and "that" in ways where you think the machine will agree on which is most likely the intended meaning. If it sounds clear to you, it will sound clear to the machine, just so long as the machine is running the same rules. We manage to learn the rules, so there's no reason to imagine that it's going to be impossible for machines to do so too. When dealing with any kind of ambiguity, you have to take all valid interpretations and score them for probability based on what makes the most sense, and determining what makes the most sense requires you to have approximately the same minimum level of intelligence as the person communicating with you. That's why it's "hard for machines" today, and that's also why it won't be hard for machines "tomorrow".

_________________
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming


 Post subject: Re: Plain English Programming - Another kick at the can.
PostPosted: Sat Mar 03, 2018 7:43 pm 

Joined: Mon Jul 25, 2016 6:54 pm
Posts: 223
Location: Adelaide, Australia
The problem with natural language programming, in my view, is that it solves a non-problem while purporting to solve a significant problem. The reason normal people don't program is not that they don't understand the language programs are written in; it's that they don't understand how computers work, and more importantly, they don't understand how to instruct a computer to solve problems for them.
Even if you can swap "for(int i = 0; i < 10; i++)" for "do the following thing 10 times", you are not one single step closer to being able to tell Siri "make an inventory management system which predictively orders stock for my 30 stores". This requires a strong AI, something capable of deeply understanding the problem space.
At this point in time, this is as much sci-fi as faster-than-light travel or fusion energy: theoretically plausible, but not attainable with the technology available to us.
Think about this: if you had an AI which could even understand the concept of "legacy cruft", let alone refactor and re-engineer an OS to remove it, you would have an AI capable of running a country, or commanding an army, or designing a CPU. Literally the singularity.


 Post subject: Re: Plain English Programming - Another kick at the can.
PostPosted: Sun Mar 04, 2018 1:51 pm 

Joined: Wed Oct 27, 2010 4:53 pm
Posts: 1150
Location: Scotland
StudlyCaps wrote:
Even if you can swap "for(int i = 0; i < 10; i++)" for "do the following thing 10 times", you are not one single step closer to being able to tell Siri "make an inventory management system which predictively orders stock for my 30 stores". This requires a strong AI, something capable of deeply understanding the problem space.

That's something we agree on - you can have full natural language capability without it making any difference to the difficulty of programming, because all it does is let you word your instructions in a greater variety of ways. You need to have the AGI part as well if the system is going to solve problems for the programmer, and you could have that without any natural language capability at all, so they are two different things. But in the course of developing AGI, you need to solve all the biggest problems in linguistics anyway (which relate to semantics and the ability to relate all words and concepts to each other correctly), and by the time you've done that work, it is trivial to add natural language at the surface level.


Quote:
At this point in time, this is as much sci-fi as faster than light travel or fusion energy, theoretically plausible, but not attainable with the technology available to us.

It's easy to make that kind of guess - I made it about the difficulty of vision, but just yesterday I saw a demonstration of a self-driving car which was using a mobile phone to do all the visual processing. We tend to make incorrect assumptions about the difficulty of the task based on the difficulty of working out how to perform that task and on looking at the amount of hardware that the human brain throws at the problem, but there are birds with much smaller brains which seem to have vision better than ours, and insects that do a pretty good job of it too despite having a brain the size of a pinhead.


Quote:
Think about this: if you had an AI which could even understand the concept of "legacy cruft", let alone refactor and re-engineer an OS to remove it, you would have an AI capable of running a country, or commanding an army, or designing a CPU. Literally the singularity.

Indeed, but that's the end goal and not the beginning. AGI needs to be built rule by rule, and as the number of rules grows and the hierarchy of those rules evolves to apply them in different orders, the capability of the system will grow, automating more and more parts of the programming process until it closes in on what we are able to do. We won't go from having nothing to having the whole package in one step, but adding the rules needn't be a terribly slow process. All human programmers go through a learning process where they acquire rules for solving problems, but they don't all learn the same rules or apply them in the same order, which leads to some performing better than others. With machines, it'll be much easier for them to experiment with applying the rules differently to find out which arrangement of them leads to the best results and the shortest time required to get there. Think about the AIs playing games like chess and Go, where they demonstrate how that flexibility leads to rapid improvements in performance that leave people gasping in disbelief. Once you have a system that can learn and experiment in this way, it may be able to do the rest in a matter of hours or minutes. I expect the same to happen with AGI, just as it happens with children when they hit a certain level of understanding and soon reach the point where they can build anything, except that machines are inordinately faster learners.
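
As a toy illustration of that kind of experiment with rule orderings (Python, with invented rewrite rules and an invented scoring criterion, nothing to do with any real AGI system), here is a handful of simplification rules and a search over their orderings to find which arrangement finishes in the fewest steps:

Code:
from itertools import permutations

# Toy illustration of experimenting with rule orderings: a few string
# rewrite rules applied greedily in a fixed order, plus a search over the
# orderings to see which arrangement reduces the input in the fewest steps.
RULES = {
    "strip_spaces": lambda s: s.replace(" ", ""),
    "fold_double":  lambda s: s.replace("aa", "a"),
    "drop_zero":    lambda s: s.replace("+0", ""),
}

def run(order, text):
    steps = 0
    changed = True
    while changed:
        changed = False
        for name in order:
            new = RULES[name](text)
            if new != text:                    # count each rewrite as a step
                text, steps, changed = new, steps + 1, True
    return text, steps

problem = "a a + 0 + aa"
best = min(permutations(RULES), key=lambda order: run(order, problem)[1])
print("best ordering:", best, "->", run(best, problem))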

_________________
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming


 Post subject: Re: Plain English Programming - Another kick at the can.
PostPosted: Mon Mar 05, 2018 6:42 pm 

Joined: Mon Jul 25, 2016 6:54 pm
Posts: 223
Location: Adelaide, Australia
Obviously nobody can predict the future, even in a few years of progress we have things that no one saw coming, and things that seemed certain that just never happened.
That said, there is one thing I disagree with: I don't think a rule-based expert system will ever be complex enough to create a useful program from scratch. Modern AIs powering things like Siri and Google's Go computer are not rule-based systems; they're neural network based.
Neural networks contain no rules as such; no combination of neurons constitutes a stand-alone "rule". Because of this, the system must be trained, and as such can only ever be thought of as a filter: given an input, it produces an output which optimally replicates transformations seen in training data. Importantly, humans cannot understand the rules, they can't improve the system by adding more information, and eventually overtraining actually decreases the system's performance.
This is, I think, important because humans do not learn in that way. No animal does. I would not even call this process of training to replicate results "learning". Learning in a human context is about understanding; despite beating a grandmaster, the Google Go computer does not know what Go is, or why it plays. In my mind this indicates that modern AI algorithms, which enable the many amazing things computers can do today, are simply not capable of real thought.

