OSDev.org

The Place to Start for Operating System Developers

 Post subject: Re: Why free software is bad
PostPosted: Fri May 27, 2016 11:45 am 

Joined: Wed Oct 27, 2010 4:53 pm
Posts: 1150
Location: Scotland
Brendan wrote:
The AGI will know that it was created and didn't evolve; and therefore it would be logical for the AGI to assume that humans were also created and didn't evolve.

The idea is to create AGI rather than AGS (artificial general stupidity). The first designer has to evolve. Humans may not be the first designer and may have been designed by the first or a later designer, but whatever the first designer is it had to acquire its ability to design through evolution: i.e. natural creation by accumulation of lucky accidents rather than magical creation with perfect, complete GI functionality which was never designed.

Quote:
You're saying AGI will exist (despite researchers spending 50+ years to completely fail to do anything even slightly close),

It's easy for people to flounder around a problem for a long time without making a dent in it before someone finds the right line of attack to unlock it. Most of the people working on the problem have studied the wrong things, while most of the ones with an extensive knowledge of linguistics have studied the wrong linguistics because of the mess that Chomsky made, which has forced them to waste years studying something far too complex for what it actually does and far too shallow in scope; many teams today are still wrestling with that horrific mess. If they manage to untangle it and get past the linguistics barrier, the rest will come together very quickly.

Quote:
and that it will be smarter than the collective intelligence of groups of humans (and not like a drunk child),

A calculator will beat any number of people at arithmetic and whatever mathematical functions it's designed to perform.

Quote:
and that it will have an impossible ability to obtain biased information and "un-bias" it,

Everything would be framed with probabilities based on source reliability and the degree to which the data fits in with other knowledge or goes against it.
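As a rough illustration of the kind of weighting I mean (the reliability figures and the assumption that the reports are independent are invented purely for the example), a few lines of Python show how yes/no reports from sources of different trustworthiness could be combined into a single probability:

Code:
import math

def update_belief(prior, reports):
    """Combine independent yes/no reports about a claim.

    prior   -- initial probability that the claim is true
    reports -- list of (asserts_claim, reliability) pairs, where reliability
               is the assumed chance that the source reports truthfully
    Returns the posterior probability that the claim is true.
    """
    log_odds = math.log(prior / (1.0 - prior))
    for asserts_claim, reliability in reports:
        # An assertion is more likely if the claim is true; a denial if it's false.
        ratio = reliability / (1.0 - reliability)
        log_odds += math.log(ratio) if asserts_claim else -math.log(ratio)
    odds = math.exp(log_odds)
    return odds / (1.0 + odds)

# Two shaky sources asserting the claim, one solid source denying it.
print(update_belief(0.5, [(True, 0.6), (True, 0.6), (False, 0.9)]))  # about 0.2

The point is that one source believed to be 90% reliable outweighs two sources believed to be only 60% reliable, which is the behaviour you want from a probability machine rather than a belief machine.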

Quote:
and that it won't be so expensive (to build and maintain) that it will have to be owned and run by a large organisation (e.g. a government, google) with their own motives/agenda,

Why should it be expensive? There will be billions of independent libraries of data all comparing notes with each other over the Net, each owned by an individual person and sitting in a machine which they can carry around with them, and there will be thousands of bigger libraries collecting all that stuff to get hold of the full picture, again comparing notes with each other.

Quote:
and that people will actually listen to this super-human figment of your imagination without dismissing it as a flawed joke (like I already do),

The people who ignore it will be the joke.

Quote:
and that somehow the wars caused by AGI (by people that want to steal/control it, people that want to stop it, and people that disagree with it) will be more fun than the wars we have now.

It's those wars that are the major threat, but backing away from the whole thing simply leaves the way clear to the people who want to create AGI for bad purposes to take control of everything. You are effectively arguing that we should sit back and let them do just that instead of trying to win that war in the least destructive way by getting in first.

Quote:
Let me be perfectly blunt here: this is not a serious discussion, you are a crackpot. Not one single part of your delusion is even slightly plausible.

I'm always happy when people make claims that will make them look more than a little silly further down the track.

Quote:
The reverse is far more likely (that AGI will invent ludicrous religious ideas of its own, and pass those ideas on to people that are so stupid that they think AGI is infallible).

Why would 100% rational AGI invent religious ideas at all? Everything it will do will be framed through probabilities except where it can prove something within a system of rules and make an absolute pronouncement on that such as "if these rules are correct, then X must be true". You need to get it through your head that AGI != AGS and stop attacking the idea of the former on the basis that it is the latter.

_________________
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming


 Post subject: Re: Why free software is bad
PostPosted: Fri May 27, 2016 12:52 pm 

Joined: Wed Oct 27, 2010 4:53 pm
Posts: 1150
Location: Scotland
Solar wrote:
Without getting into any further detail regarding your arguments, let it be said that your "vision" scares the s*** out of me, and that I would actively oppose any initiative to put this vision into reality, because I believe your "solution" would end up being far worse than any of the problems it wants to solve.

You think it doesn't scare me too? It's highly likely that we're reaching the point where any intelligent species wipes itself out by creating technology which can get out of control and turn on its creator, but backing away is no solution because the worst people will certainly not stop developing it and they'll just be handed more power if the good people shut their programs down. The safest route would be for all governments to recognise the danger now and to work together to ensure that safe AGI gradually takes over under their collective control while they check it carefully at every step to make sure it isn't going to go crazy (and using more than one independently-designed system of AGI so that they only act on anything that all those systems agree on), but that would depend on getting all the psychopaths running nuclear powers to give up their power voluntarily first, and I don't think that's going to happen any time soon.

Those psychopaths will, if they remain in power, provide protection for the development of lethal AGI and devices to install it in which will be able to deliver death to all their enemies, or they'll use it to help develop genetic weapons which can be released anywhere without anyone knowing who's to blame, potentially making it impossible to know who to strike back against as there may be more than one regime of that kind to point the finger at. A large attack would likely lead to a worldwide nuclear conflict, but a small one probably would not. That means that Dr. Evil (any psychopath running a country with nuclear weapons) could launch a series of small, localised attacks without any negative consequences for him, and those small attacks could add up over time into complete conquest of the world: he would be prepared to take the risk of nuclear war breaking out each time because it's highly unlikely that the world's suicide button would be pressed to defeat him after any small attack, as everyone would hope that he might die soon and be replaced by someone sane who will seek peace instead.

I think that's the direction things will go in if we don't use AGI to take over the world early on, but AGI could identify the right people to work with in each despotic regime so that it can be done with only a few shots being fired. I certainly wish you luck with getting all the world's powerful despots to give up power voluntarily so that this isn't necessary, because that would take all the pressure off and make things easy.

_________________
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming


 Post subject: Re: Why free software is bad
PostPosted: Fri May 27, 2016 1:24 pm 

Joined: Wed Jun 17, 2015 9:40 am
Posts: 501
Location: Athens, Greece
Hi,


trident wrote:
glauxosdever wrote:

Security

Free software as in freedom means that everyone has the right to view, edit and republish the code. Viewing the code allows malicious hackers to find flaws and exploit them. Editing and republishing the code means that these hackers can ask infected users to pay to get their own fixes. These fixes are usually of questionable quality too.

There are also several instances of free software publishers that have advertisements waiting to trick users into clicking them. Additionally, they usually provide installers that will install adware and spyware without the user's consent. I will not get into enumerating these malicious websites, though.



Why did you not post this on the openbsd-misc mailing list?
Could you please check the date I posted it? If it doesn't seem right for your timezone, check the next posts.

It's fun fooling people with an April Fools' post even two months later. But it's not fun debating irrelevant things when you could be doing useful work, or even helping others do useful work. I'd be happy if this topic finally went idle.


Regards,
glauxosdever


 Post subject: Re: Why free software is bad
PostPosted: Fri May 27, 2016 2:06 pm 

Joined: Wed Jul 13, 2011 7:38 pm
Posts: 558
Welcome to OSDev.org, where nobody looks at post dates before replying, people argue about the dumbest things, and the search button is so hard to find it's on the "lost wonders of the osdev world" along with the rewrite of The Tutorial That Shall Not Be Named and a working copy of Brendan's OS.


 Post subject: Re: Why free software is bad
PostPosted: Fri May 27, 2016 4:17 pm 

Joined: Thu Mar 25, 2010 11:26 pm
Posts: 1801
Location: Melbourne, Australia
Kazinsal won quote of the year when he wrote:
Welcome to OSDev.org, where nobody looks at post dates before replying, people argue about the dumbest things, and the search button is so hard to find it's on the "lost wonders of the osdev world" along with the rewrite of The Tutorial That Shall Not Be Named and a working copy of Brendan's OS.
I genuinely hope that nobody believes the garbage in this thread.

Brendan made the understatement of the year when he wrote:
Let me be perfectly blunt here: this is not a serious discussion, you are a crackpot. Not one single part of your delusion is even slightly plausible.
DavidCooper, do you realise that you sound like a religious/cult leader? Asking us to put our faith in the almighty (AGI) with the promise that it will be fair to all. That we'll all be happy and never have to work again. We've been hearing stories like this for thousands of years.

_________________
If a trainstation is where trains stop, what is a workstation ?


 Post subject: Re: Why free software is bad
PostPosted: Fri May 27, 2016 5:23 pm 

Joined: Wed Oct 27, 2010 4:53 pm
Posts: 1150
Location: Scotland
gerryg400 wrote:
DavidCooper, do you realise that you sound like a religious/cult leader? Asking us to put our faith in the almighty (AGI) with the promise that it will be fair to all. That we'll all be happy and never have to work again. We've been hearing stories like this for thousands of years.

I'm not promising you anything except that there will be AGI (barring a massive natural catastrophe or nuclear war getting in first), and it may kill you or it may be kind to you, but the likelihood of the former will be much higher if we just make it freely open to all the wrong people. If you want to ignore that and be shafted by the owner of evil AGI, then go ahead and congratulate yourself on helping that to come about by attacking what I'm saying in a wholly unconstructive way. The best-case scenario is that good AGI wins out and liberates us all from pointless toil while sharing out resources fairly. If you are so sure that won't happen that you want to ridicule anyone who tries to bring it about, then I can't find the right expletives to direct at you.

_________________
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming


 Post subject: Re: Why free software is bad
PostPosted: Fri May 27, 2016 6:55 pm 

Joined: Thu Mar 25, 2010 11:26 pm
Posts: 1801
Location: Melbourne, Australia
DavidCooper wrote:
I'm not promising you anything except that there will be AGI (barring a massive natural catastrophe or nuclear war getting in first), and it may kill you or it may be kind to you, but the likelihood of the former will be much higher if we just make it freely open to all the wrong people. If you want to ignore that and be shafted by the owner of evil AGI, then go ahead and congratulate yourself on helping that to come about by attacking what I'm saying in a wholly unconstructive way. The best-case scenario is that good AGI wins out and liberates us all from pointless toil while sharing out resources fairly. If you are so sure that won't happen that you want to ridicule anyone who tries to bring it about, then I can't find the right expletives to direct at you.
Sounds legit.

AGI will lead us to the promised land. Hmm. Where have I heard that before?

_________________
If a trainstation is where trains stop, what is a workstation ?


 Post subject: Re: Why free software is bad
PostPosted: Sat May 28, 2016 2:29 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

DavidCooper wrote:
Brendan wrote:
The AGI will know that it was created and didn't evolve; and therefore it would be logical for the AGI to assume that humans were also created and didn't evolve.

The idea is to create AGI rather than AGS (artificial general stupidity). The first designer has to evolve. Humans may not be the first designer and may have been designed by the first or a later designer, but whatever the first designer is it had to acquire its ability to design through evolution: i.e. natural creation by accumulation of lucky accidents rather than magical creation with perfect, complete GI functionality which was never designed.


You're allowing your own bias and/or wishful thinking (and/or a desire for your fantasy machine to produce the same answer you would have) to destroy logic.

Consider the question "If a Foo is like a Bar and a Bar is definitely pink, what colour is a Foo?". A logical machine would assume that because a Foo and a Bar are similar, the most likely answer is that a Foo and a Bar are both pink.

Now consider the question "If AGI shares multiple characteristics with Humans (both a type of intelligent entity) and the first AGI was definitely designed and created, where did the first Human come from?". This is the same as the last question - a logical machine would assume that because AGI and Humans share multiple characteristics the most likely answer is that the first AGIs and the first humans were both designed and created.

Of course if AGI believes biased information, then you can tell the AGI that humans evolved, or that humans have always existed, or that humans don't exist (and are just AGI machines pretending to be "biological"), or whatever else you like, and the AGI will believe you. However, you're attempting to pretend that your AGI won't believe biased information.

DavidCooper wrote:
Quote:
You're saying AGI will exist (despite researchers spending 50+ years to completely fail to do anything even slightly close),

It's easy for people to flounder around a problem for a long time without making a dent in it before someone finds the right line of attack to unlock it. Most of the people working on the problem have studied the wrong things, while most of the ones with an extensive knowledge of linguistics have studied the wrong linguistics because of the mess that Chomsky made, which has forced them to waste years studying something far too complex for what it actually does and far too shallow in scope; many teams today are still wrestling with that horrific mess. If they manage to untangle it and get past the linguistics barrier, the rest will come together very quickly.


Pure bullshit. Why not create intelligence, then let it learn languages and linguistics the same way people do (and then ask your AGI machine how to get past the "linguists are too stupid to see that languages have everything to do with communication/IO and nothing to do with intelligence" barrier)?

DavidCooper wrote:
Quote:
and that it will be smarter than the collective intelligence of groups of humans (and not like a drunk child),

A calculator will beat any number of people at arithmetic and whatever mathematical functions it's designed to perform.


A calculator gives accurate results quickly because it's not intelligent.

DavidCooper wrote:
Quote:
and that it will have an impossible ability to obtain biased information and "un-bias" it,

Everything would be framed with probabilities based on source reliability and the degree to which the data fits in with other knowledge or goes against it.


So it will have the impossible ability to obtain biased information and construct "unbiased probabilities" from it (where the resulting "impossibly unbiased probabilities" are then applied to more biased information to obtain "impossibly unbiased information")?

There are 4 people in the same room. You ask all 4 people what the temperature is inside the room. 3 of the people collude and deliberately give you the same false answer ("very cold"). One person tells you the truth ("nice and warm"). You don't know that 3 people have colluded - you only know that you've got 3 answers that are the same and one that isn't. Which answer do you believe?

There are 4 people in the same room, and every day you ask them what the temperature is inside the room. 3 of the people have colluded and decided to always lie and always tell you an answer that is 20 degrees colder than it actually is; and their answers are always very close together (even when you know it's impossible for them to have talked to each other that day). One person is always telling you the truth, so their answer is always different from everyone else's. You don't know about the collusion. How do you determine your "probabilities based on source reliability" to ensure that you're not believing the liars?

DavidCooper wrote:
Quote:
and that it won't be so expensive (to build and maintain) that it will have to be owned and run by a large organisation (e.g. a government, google) with their own motives/agenda,

Why should it be expensive? There will be billions of independent libraries of data all comparing notes with each other over the Net, each owned by an individual person and sitting in a machine which they can carry around with them, and there will be thousands of bigger libraries collecting all that stuff to get hold of the full picture, again comparing notes with each other.


Will it also fly faster than a speeding bullet, and make hamburgers appear out of thin air on request?

This is just more "it doesn't exist in practice, therefore there's no practical limits to my wishful thinking" nonsense.

DavidCooper wrote:
Quote:
and that somehow the wars caused by AGI (by people that want to steal/control it, people that want to stop it, and people that disagree with it) will be more fun than the wars we have now.

It's those wars that are the major threat, but backing away from the whole thing simply leaves the way clear to the people who want to create AGI for bad purposes to take control of everything. You are effectively arguing that we should sit back and let them do just that instead of trying to win that war in the least destructive way by getting in first.


No, I'm saying that even if your pipe-dream AGI nonsense was a proven reality, it wouldn't decrease the number or severity of wars and would actually increase the number and/or severity of wars.

DavidCooper wrote:
Quote:
Let me be perfectly blunt here: this is not a serious discussion, you are a crackpot. Not one single part of your delusion is even slightly plausible.

I'm always happy when people make claims that will make them look more than a little silly further down the track.


This reminds me of a guy that claimed his OS would have AI that would auto-transform software into something compatible with his OS. I bet he feels silly now that it's 5 or 6 years further down the track.

DavidCooper wrote:
Quote:
The reverse is far more likely (that AGI will invent ludicrous religious ideas of its own, and pass those ideas on to people that are so stupid that they think AGI is infallible).

Why would 100% rational AGI invent religious ideas at all? Everything it will do will be framed through probabilities except where it can prove something within a system of rules and make an absolute pronouncement on that such as "if these rules are correct, then X must be true". You need to get it through your head that AGI != AGS and stop attacking the idea of the former on the basis that it is the latter.


If it's AGI (and not AGS) then it's capable of finding a solution that it was not given (e.g. you give it a false dilemma and it rejects the given solutions and "invents" its own solution). For example, if you ask it "Is three multiplied by two equal to 4 or 8?" it might say "Neither, three multiplied by two equals 6". For example, you ask it "Where did the first humans come from, were they created by one or more God/s or evolved from simpler life forms?" and it might say "Neither, they came from ....."


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Why free software is bad
PostPosted: Sat May 28, 2016 3:50 am 

Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
gerryg400 wrote:
do you realise that you sound like a religious/cult leader?

Do you realize why people believe in religion?
gerryg400 wrote:
Asking us to put our faith in the almighty (AGI) with the promise that it will be fair to all. That we'll all be happy and never have to work again. We've been hearing stories like this for thousands of years.

In the end there will be some AI that is more intelligent than you or I or anybody else. So why can't it help us a bit? It can't help if it's an enemy, or if we are irrelevant to its goals. But if it is our creation, then it's probable that we would manage to install some means of control over the beast. In that case, why would it be impossible for it to be fair to all?

David Cooper believes in the better outcome. The worst outcome is also possible, and David is telling you that the worst becomes more probable if you deny the idea that it is possible. Yet you just deny any idea related to AI, because every such idea looks like a fairy tale (with either a good or a bad ending).

_________________
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)


 Post subject: Re: Why free software is bad
PostPosted: Sat May 28, 2016 4:07 am 

Joined: Thu Mar 25, 2010 11:26 pm
Posts: 1801
Location: Melbourne, Australia
embryo2 wrote:
In the end there will be some AI that is more intelligent than you or I or anybody else.
There is no such thing as intelligence. What on earth are you talking about?

_________________
If a trainstation is where trains stop, what is a workstation ?


 Post subject: Re: Why free software is bad
PostPosted: Sat May 28, 2016 11:20 am 

Joined: Wed Oct 27, 2010 4:53 pm
Posts: 1150
Location: Scotland
Brendan wrote:
You're allowing your own bias and/or wishful thinking (and/or a desire for your fantasy machine to produce the same answer you would have) to destroy logic.

Consider the question "If a Foo is like a Bar and a Bar is definitely pink, what colour is a Foo?". A logical machine would assume that because a Foo and a Bar are similar the most likely answer is that Foo and a Bar are both pink.

A logical machine would, in the absence of any other information on this, determine that there was a higher chance of a Foo being pink than it would without the information that a Foo is like a Bar and a Bar is pink. If the information is coming from a source that is likely trying to mislead it, it will instead determine that there's less chance of a Foo being pink rather than more; and if the information is coming from a cunning source that may engage in a double bluff, it may not change the probability it has attached to the idea of a Foo being pink at all. As I said before, this is about AGI and not AGS.

Quote:
Now consider the question "If AGI shares multiple characteristics with Humans (both a type of intelligent entity) and the first AGI was definitely designed and created, where did the first Human come from?". This is the same as the last question - a logical machine would assume that because AGI and Humans share multiple characteristics the most likely answer is that the first AGIs and the first humans were both designed and created.

If that is all the AGI system knows, then at that point it will determine that there is a possibility that the first human was designed and created. As it receives more data it will also determine that there's a possibility that humans evolved without any intelligent designer, but it won't rule out the possibility of there being an intelligent designer which dictated the entire process. In this particular case, so long as evolution looks like a possible mechanism for humans being created, it's so unknowable that there's no way to put a proper value to the probability either way because we can't measure how many human-like species were created by hidden designers and how many actually evolved: we only have access to one case and we don't know how it came into being.

Quote:
Of course if AGI believes biased information, then you can tell the AGI that humans evolved, or that humans have always existed, or that humans don't exist (and are just AGI machines pretending to be "biological"), or whatever else you like, and the AGI will believe you. However, you're attempting to pretend that your AGI won't believe biased information.

AGS might believe you and put 100% in as a probability where a value less than 100 should be used, but AGI won't make such unacceptable jumps of assumption. Many people have NGS (natural general stupidity) and believe all manner of things they're told without checking, but a few have NGI and question everything. People with NGS also get stuck with their beliefs, so when they start to generate contradictions, they don't rethink everything from scratch, but take the lazy way out and just tolerate the contradictions instead while telling themselves that contradictions are okay. AGI will not tolerate contradictions, so as soon as a contradiction is generated it will know that there is a fault in the data and it will hunt it down. It may not be able to work out which part of the data is faulty, but it may be able to identify which parts could be wrong and it could then ask for more data related to those parts in an attempt to resolve the issue.

Quote:
Pure bullshit. Why not create intelligence, then let it learn languages and linguistics the same way people do (and then ask your AGI machine how to get past the "linguists are too stupid to see that languages have everything to do with communication/IO and nothing to do with intelligence" barrier)?

Language is for communication, but it maps to thought and they are similarly constructed, though thoughts are networks of ideas while spoken language has to be linear, having lots of ways of following different branches of ideas sequentially. (It is possible that thoughts are stored as linear data too, and they are in my AGI system, but the structuring is different, removing all the mess of the natural language forms.) To get an understanding of how thinking works, language is a good starting point.

It's possible to start without it too, though, and if you do that you'll be building up from mathematics and reason. We have all the necessary maths and reasoning worked out already, but what happens when you want to fill the machine with knowledge presented to it through human language? Without solving all the linguistics problems (grammatical and semantic), you can't bridge the gap. A lot of thinking is done using high-level concepts without breaking things down to their fundamental components, so even if you are trying to build AGI without considering language at all, you're still going to be using most of the same concepts, and studying language is a shortcut to identifying them. If you were working with vision and ignoring language, you'd still be labelling lots of identifiable parts and then checking how they're arranged to see if there's a compound object made out of many of those parts which could match up with a concept that represents that kind of compound object, and when you add language to the system later on you will then assign a name to it to replace the unspeakable coding that's used in thought.

Many of the things AGI will need to do involve simulation: it isn't enough just to work with concepts, but you have to be able to generate representations of things in a virtual space and imagine (simulate) interactions between them, or analyse alignments. It would be fully possible to develop AGI in such a way that you start with mathematics and reason, then add this simulation, then add machine vision, and only add language at the end of the process, but you'd be missing a trick, because what's really going to guide you in this is the analysis of your own thinking, asking yourself, "how do I work that out?" In that kind of study, you're working with thoughts which are already very close to language, and when you write notes about what you're working out, you do all of that through language too, translating the thoughts into language to record them. Studying thought is the way to make progress, and studying language is the best way to get a handle on what thought does: the aim is to see through language to the deep structures of the actual thoughts that lie below the surface.

Quote:
A calculator gives accurate results quickly because it's not intelligent.

A calculator displays some components of intelligence. A reasoning program that can solve logic problems where all the complexities of language have been removed also displays some components of intelligence, but for it to solve real problems it needs a human to convert them into a form that it can handle. An AGI system will be able to do the whole task without the human cutting through the linguistics barrier for it every time.

Quote:
So it will have the impossible ability to obtain biased information and construct "unbiased probabilities" from it (where the resulting "impossibly unbiased probabilities" are then applied to more biased information to obtain "impossibly unbiased information")?

During the war between the Russians and Mujahideen in Afghanistan, the Russians put out biased news about it on Radio Moscow (which they broadcast around the world). Whenever they said they'd killed a hundred Mujahideen fighters, they'd actually killed ten. Whenever they said ten of their own troops had been killed, the real number was a hundred. When you understand the bias and the algorithms used to generate it, you can unpick them and get close to the truth. You don't just guess what the bias might be, though: you look at independent information sources and try to identify patterns. The Mujahideen were applying the same scale of bias in the opposite direction, which allowed you to correct all their figures too, and the adjusted scores from both sides matched up very well. Sometimes there was a BBC journalist with a Mujahideen group too who was providing unbiased data on the number of deaths that had actually taken place in an incident, and this further confirmed the bias algorithms that were being applied by both sides.
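To show the kind of unpicking I mean, here's a rough sketch; the incident figures are invented purely for illustration, but the method is the one described above: estimate each side's inflation factor from incidents where an independent observer counted, then divide it out of new claims and compare with the other side's figures.

Code:
# Hypothetical incidents: (side A's claimed enemy dead, side B's admitted dead,
# independently counted dead). All numbers are invented for illustration only.
calibration = [
    (100, 12, 10),   # A claims 100, B admits 12, a journalist counted 10
    (50, 6, 5),
    (200, 18, 20),
]

# Estimate A's inflation factor from the incidents with an independent count.
factor = sum(claim_a / true for claim_a, _, true in calibration) / len(calibration)
print("estimated inflation factor:", round(factor, 1))   # roughly 10

# Correct a new claim from A and compare it with B's own admission.
new_claim_a, new_admission_b = 300, 35
corrected = new_claim_a / factor
print("corrected figure:", round(corrected), "vs the other side's figure:", new_admission_b)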

Quote:
There are 4 people in the same room. You ask all 4 people what the temperature is inside the room. 3 of the people collude and deliberately give you the same false answer ("very cold"). One person tells you the truth ("nice and warm"). You don't know that 3 people have colluded - you only know that you've got 3 answers that are the same and one that isn't. Which answer do you believe?

You apply probabilities to it and attempt to work out why there is a mismatch in the data. AGI is not a belief machine like AGS, but a probability machine. If the three people are smirking at each other, that increases the probability that they are lying. If they are all shivering, you determine that it's likely that they are telling the truth, or that those three may have a fever.

Quote:
There are 4 people in the same room, and every day you ask them what the temperature is inside the room. 3 of the people have colluded and decided to always lie and always tell you an answer that is 20 degrees colder than it actually is; and their answers are always very close together (even when you know it's impossible for them to have talked to each other that day). One person is always telling you the truth, so their answer is always different from everyone else's. You don't know about the collusion. How do you determine your "probabilities based on source reliability" to ensure that you're not believing the liars?

If you know nothing else about them and are incapable of recognising any signs that they are lying, then you would put a high probability on them being the ones who are right and a high probability on the fourth person having an unusual physiology. If you have other data about these people in other situations, though, and know that there aren't any such mismatches in those situations, you can determine that it's unlikely that there's anything physiologically unusual about the fourth person (unless there's something unique about the room which might trigger it). You would then put a high probability on some bad information being provided, and the probability as to whether the three are lying or the one is lying would need to be calculated from the proportion of other similar cases favouring the group or the individual as the prime suspect. You would then try to find an alternative way to measure the temperature in the room so that you can resolve the question.

Note that I used the word "you" in that description even though I was describing what the AGI would do, and there's a good reason for that: the AGI system would do the same things as an intelligent person would do in its attempt to resolve the mystery. When it's starved of other data, it will reach the same initial conclusion that the three people are more likely to be telling the truth because their data is better matched, but crucially it will be right because it is assigning the correct probability to this. An AGS in the same situation would be wrong because it would believe the three people and not assign the correct probability - an AGS belief system applies 100% probabilities even though the matter is not resolved with certainty.

This reveals a lot about how some people think, because there are a lot of NGS belief systems out there which lock into unjustifiable beliefs instead of keeping their minds open. There are others, though, closer to NGI, who refuse to accept any certainty at all, even when a proof is possible (under a set of rules which are taken to be true: if the rules are true, then the conclusion is certain).
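As a rough sketch of how those "probabilities based on source reliability" could be maintained over the repeated askings (the class, the tolerance value and the temperatures are invented for the example, and it assumes that an independent measurement eventually becomes available, like the alternative thermometer mentioned above):

Code:
class SourceReliability:
    """Track how often a source's reports match independently verified facts.

    A simple Beta(1, 1) prior (Laplace smoothing) is used: with no evidence the
    estimated reliability is 0.5, and it only moves as verified cases accumulate.
    """
    def __init__(self):
        self.agreed = 1      # pseudo-count of verified-correct reports
        self.disagreed = 1   # pseudo-count of verified-wrong reports

    def record(self, report, verified_truth, tolerance=2.0):
        if abs(report - verified_truth) <= tolerance:
            self.agreed += 1
        else:
            self.disagreed += 1

    def reliability(self):
        return self.agreed / (self.agreed + self.disagreed)

# Suppose a thermometer is eventually brought in and reads 22 degrees each day.
sources = {"p1": SourceReliability(), "p2": SourceReliability(),
           "p3": SourceReliability(), "p4": SourceReliability()}
for _ in range(10):
    reports = {"p1": 2.0, "p2": 2.5, "p3": 1.5, "p4": 22.0}  # three colluders, one honest
    for name, report in reports.items():
        sources[name].record(report, verified_truth=22.0)

for name, s in sources.items():
    print(name, round(s.reliability(), 2))   # colluders near 0.08, honest near 0.92

Until that independent check exists, all four sources keep the neutral 0.5 estimate, which is the point: the system holds probabilities open rather than locking into a belief.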

Quote:
Will it also fly faster than a speeding bullet, and make hamburgers appear out of thin air on request?

This is just more "it doesn't exist in practice, therefore there's no practical limits to my wishful thinking" nonsense.

If I say it will be able to do things that are fully possible, why are you extending that into making out I'm saying it will be able to do things that are impossible? You're applying NGS reasoning.

Quote:
No, I'm saying that even if your pipe-dream AGI nonsense was a proven reality, it wouldn't decrease the number or severity of wars and would actually increase the number and/or severity of wars.

That's a possibility, but it will cause fewer wars than bad AGI, and bad AGI's only going to be prevented by using good AGI.

Quote:
This reminds me of a guy that claimed his OS would have AI that would auto-transform software into something compatible with his OS. I bet he feels silly now that it's 5 or 6 years further down the track.

Things take time, but I know what I'm building and I know what it will be able to do, so if you think I look silly at the moment, that situation isn't going to last. I hadn't reckoned on the health problems that I've had to battle against over the last few years, but I'm back to working at full speed at the moment.

Quote:
If it's AGI (and not AGS) then it's capable of finding a solution that it was not given (e.g. you give it a false dilemma and it rejects the given solutions and "invents" its own solution). For example, if you ask it "Is three multiplied by two equal to 4 or 8?" it might say "Neither, three multiplied by two equals 6". For example, you ask it "Where did the first humans come from, were they created by one or more God/s or evolved from simpler life forms?" and it might say "Neither, they came from ....."

That's mostly right: it might find something in the data that we've all missed which reveals an answer that we were created by a child outside of the universe playing with a universe creation game, but again it would apply probabilities to that which show that this is uncertain because that data might be deliberately misleading.

_________________
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming


 Post subject: Re: Why free software is bad
PostPosted: Sun May 29, 2016 5:18 am 

Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
gerryg400 wrote:
There is no such thing as intelligence. What on earth are you talking about?

There is some stuff that you believe you have. I'm talking about it.

_________________
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)


 Post subject: Re: Why free software is bad
PostPosted: Sun May 29, 2016 9:18 am 

Joined: Wed Jan 06, 2010 7:07 pm
Posts: 792
Why do we have to believe anything?

_________________
[www.abubalay.com]


 Post subject: Re: Why free software is bad
PostPosted: Wed Jun 01, 2016 2:17 am 

Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
Rusky wrote:
Why do we have to believe anything?

At least it is worth believing that you exist. The alternative is to believe that you don't exist, and then you need a theory that describes the world without you. It would be an interesting theory, but I prefer the simpler approach where I believe I still exist.

Actually it's just about the basis for reasoning. You can start wherever you want, but the number of steps to resolve along the way can be very different. If one had no problem with time limits, it would be possible to go down many paths and find which way is... more interesting? But what is "interesting"? Maybe it is possible to answer, but first you would need to have the paths behind you.

Well, in fact I just don't know why we should believe in anything :)

_________________
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)


 Post subject: Re: Why free software is bad
PostPosted: Wed Jun 01, 2016 5:31 am 

Joined: Wed Feb 10, 2016 3:29 am
Posts: 31
Location: London, UK
TL;DR

Was Hitler mentioned already? Because according to a recent study (on Reddit), almost every conversation, if it's long enough, ends up mentioning Nazis in some way or another. :)

_________________
Software development blog
Mobile Development Team
Web Development Team
UX/UI

