Can a Computer Program Exhibit Sentience?

Well, first I would posit (having simulated electronic/computer chips, boards, systems, etc., at gate level and functional/behavioral levels over my career, plus being a S/W engineer for decades) that we can simulate LOTS: subatomic particles to metaverses to big bangs, etc.!

Now, WRT a particular program/system/bag-o-bits being characterized as sentient, it’s going to take more than one (or more) Google engineer’s opinion to convince me there’s one (or many) brewing in some cloud(s) deep in the recesses of the dark/light/shiny internet/web. If one reviews Alan Turing’s oft-mangled “test for AI”, it’s simply a test of whether an AI can fool a human test subject sitting at a typewriter/keyboard/monitor asking questions. Well, as current events have shown us, I’m not going to speculate on what percentage of the 7+ billion people on this planet could themselves be characterized as “intelligent” (sorry for the dig, but not all wetware SHOULD be characterized as “intelligent”/“sentient”). I would also posit that any authoritative assessment/quantification/qualification of sentience or intelligence in ANY system would need the knowledge/expertise of good linguists, psychologists (cognitive and others), sentience-systems experts, behaviorists, “Theory of Mind” experts/scientists, etc. One of the last opinions I would accept on sentience is some enginerd spewing (IMHO) nonsense on social media.
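
(For the code-inclined, here’s a minimal Python sketch of the imitation game as Turing actually framed it. The interrogator object and the two respondent callables are hypothetical stand-ins I made up for illustration, not any real chatbot API; the point is only that the interrogator sees nothing but text on two anonymous channels.)

```python
import random

def imitation_game(interrogator, human_reply, machine_reply, rounds=10):
    """Toy sketch of Turing's imitation game.  The interrogator sees only
    text on two anonymous channels and must guess which one hides the
    machine.  All three participants are hypothetical stand-ins."""
    channels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:                  # hide who sits behind A and B
        channels = {"A": machine_reply, "B": human_reply}
    for _ in range(rounds):
        question = interrogator.ask()          # interrogator poses a question
        for label, respond in channels.items():
            interrogator.observe(label, respond(question))   # text only
    guess = interrogator.guess_machine()       # returns "A" or "B"
    return channels[guess] is machine_reply    # True = machine unmasked
```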

I’ll now convey to you a gedankenexperiment posited by my dear friend Jim Huffman, former Director of the “Center for Emerging Computing Technology” at a company I’ll not mention (because I’m scared to), wherein they had created a simulation I’ll label a “creature” (ask Dr. Google about “artificial life”; Jim might characterize it differently). A renowned university in Austin assessed the intelligence of their creature as “a slightly learning-disabled 5-year-old”. Also please review Nobel Laureate Dr. Gerald Edelman et al.’s “Darwin III” (maybe IV/V/VI?) as well as Nobel Laureate Dr. Murray Gell-Mann et al.’s A-life creatures (I was fortunate to meet both gentlemen back in the ’90s).

The gedankenexperiment (this is my recollection/interpretation of a notion Jim postulated):

Let’s assume you have 2 simulated a-life creatures in 3/4/(x) dimensional space, each can recognize/sense things in their environment, they can move about according to their desires/etc., they each have a “will”, so can choose what they do and where they go in their virtual world, complete with flora and fauna, etc. (maybe even physics works 😉 ). The only instructions they are given is “do not pick this particular fruit”. If one of them picks that “forbidden fruit”, THEN WHAT?
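
(Here’s a toy Python sketch of how one might wire up the experiment. Everything in it — the temptation parameter standing in for “will”, the fruit names, the two-item world — is my own invention, purely to make the setup concrete; a real a-life creature would of course be vastly richer.)

```python
import random

FORBIDDEN = "forbidden_fruit"

class Creature:
    """A cartoon a-life creature: it senses the fruit around it and has a
    "will" reduced to a temptation parameter plus randomness."""
    def __init__(self, name, temptation=0.1):
        self.name = name
        self.temptation = temptation    # chance of defying the one rule

    def act(self, visible_fruit):
        # Sense the environment, then choose: obey the instruction or not.
        allowed = [f for f in visible_fruit if f != FORBIDDEN]
        if FORBIDDEN in visible_fruit and random.random() < self.temptation:
            return FORBIDDEN            # ...THEN WHAT?
        return random.choice(allowed) if allowed else None

world = [FORBIDDEN, "apple", "fig"]
for creature in (Creature("alpha"), Creature("beta")):
    picked = creature.act(world)
    print(creature.name, "picked", picked)
    if picked == FORBIDDEN:
        print(creature.name, "picked the forbidden fruit... then what?")
```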

BTW, back in the mid-1960s Joseph Weizenbaum wrote a program, “ELIZA”, while working at the MIT Artificial Intelligence Laboratory, which was (my characterization) a conversational (I’ll loosely use the CNRI-trademarked term) “knowbot” that would emulate/simulate a psychoanalyst’s dialog with the user (you, the “patient”). I explored the code a bit, and that program could very easily (with 2-3 lines of code) be modified to respond to the question “do you have a soul” with “Yes, and it’s chartreuse and smells of sweet lavender” (or any string of nonsense you want it to emit in response to this or similar questions). I have 10-15 pages of dialog I had with the program back in the ’70s; frankly, I said/asked some embarrassingly disgusting things in that dialog (just trying to bang the bounds of its capabilities), but it’s in that archaic “pulp/paper” media/form, and one day maybe I’ll OCR it and post a link here (perversity redacted, of course). Also see Terry Winograd’s SHRDLU, an early natural-language-understanding computer program developed by Dr. Winograd at MIT in 1968–1970.
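
(To make that “2-3 lines of code” claim concrete, here’s a toy ELIZA-style pattern matcher in Python. The patterns and canned replies are my own invention, not Weizenbaum’s original DOCTOR script, but the mechanism is the same: match a keyword pattern, emit a canned or template reply. The “soul” hack is literally one more pattern/response pair.)

```python
import re

# A toy ELIZA-style responder: ordered (pattern, reply-template) rules.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]

# The 2-3 line "soul" hack described above: one more pattern/response pair,
# inserted ahead of the others so it wins.
RULES.insert(0, (re.compile(r"do you have a soul", re.I),
                 "Yes, and it's chartreuse and smells of sweet lavender"))

def reply(line):
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return "Please go on."        # default deflection, as the original often did

print(reply("Do you have a soul?"))
# -> Yes, and it's chartreuse and smells of sweet lavender
```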

(Have I made your wetware a mess? Besides, if “it’s alive”, it’s way past time to be thinking of steel-collar rights.)

WRT the future of artificial sentience, just look at how we’ve progressed relative to computers and computation in the past 50 years. Extrapolate that over the next 500+ and imagine the capabilities to simulate everything from super-subatomic to meta-metaverses. Rest assured the feeble sentience we enjoy today will be utterly archaic compared to the sentience simulatable by those artifacts.

For an interesting read on this and similar topics, see “What to Think About Machines That Think: Today’s Leading Thinkers on the Age of Machine Intelligence” 🙂

Stv

In a project I was involved with at Vanderbilt University in the mid-’80s, we were researching AI in manufacturing, specifically the creation of a knowledge-based expert system to be used for troubleshooting an electronics assembly manufactured by a no-longer-existing telecom company. The “expert(s)” we were supposed to have at our disposal for knowledge engineering/acquisition used different criteria for how they would repair/tune the assembly on different days, and could not put into words any rules/heuristics/whatever they used for troubleshooting. (There WAS a “troubleshooting manual” for a prior/discontinued version of the assembly, but it was useless for this project; I was brought in AFTER this particular assembly was chosen.) So we had no expert knowledge we could capture and codify, and instead had to simulate missing/wrong components in the circuit to derive rules for the expert system. The ES could eventually correctly diagnose and recommend repair at >85% accuracy.
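
(A minimal Python sketch of the flavor of rule base we ended up with. The symptom names, component designators, and the simple all-symptoms-present matching scheme are invented here for illustration; the real rules came out of the fault-simulation campaign described above, not from any written-down expertise.)

```python
# Each rule maps a set of observed symptoms to a suspected component fault.
# Symptom and component names below are hypothetical, for illustration only.
RULES = [
    ({"no_output", "rail_ok"},          "U7 missing/reversed"),
    ({"distorted_output", "bias_high"}, "R12 wrong value"),
    ({"no_output", "rail_low"},         "C3 shorted"),
]

def diagnose(observed_symptoms):
    """Return the first suspect whose rule's symptoms are all present."""
    observed = set(observed_symptoms)
    for symptoms, suspect in RULES:
        if symptoms <= observed:        # every required symptom was observed
            return suspect
    return "no rule fired - escalate to a human"

print(diagnose({"no_output", "rail_ok", "hum"}))   # -> U7 missing/reversed
```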

Addendum:

Bag-o-bits and the genetic algorithm: while researching/developing the aforementioned expert system, I was curious about learning systems, since things changed so often in the design/manufacturing/production stages during the evolution of this product; WRT creating a KB artifact, I surmised the only way to address this was via machine-learning systems. The learning-system technology du jour at the time was the genetic algorithm, in which you maintain a population of candidate solutions (bit strings, in the classic formulation, seeded randomly), flip/recombine some bits to produce offspring, and see how that plays out using some form of “goodness” quantifier/qualifier (a fitness function) that determines whether a trait should be kept or discarded, just like ye olde Darwinian “survival of the fittest”. I was also lucky enough to attend the 1985 International Joint Conference on Artificial Intelligence held at UCLA, where I attended/concentrated on sessions on “learning systems”, “cognitive modeling”, and “machine perception”.
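
(Here’s a bare-bones genetic algorithm of that mid-’80s flavor in Python, using the toy “one-max” fitness, i.e. count the 1 bits, as the “goodness” quantifier. Population size, mutation rate, and the fitness function are all arbitrary stand-ins, not anything from the actual project.)

```python
import random

BITS, POP, GENERATIONS, MUTATION = 32, 20, 100, 0.02

def fitness(genome):
    return sum(genome)                  # toy "goodness" quantifier: count of 1 bits

def mutate(genome):
    # Flip each bit with small probability (bool XOR int works in Python).
    return [b ^ (random.random() < MUTATION) for b in genome]

# Seed a random population of bit strings.
pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]

for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]          # ye olde survival of the fittest
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP - len(survivors))]

print("best fitness:", fitness(max(pop, key=fitness)))
```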

Share if you like!


2 thoughts on “Can a Computer Program Exhibit Sentience?”

  • Peter Cochrane

    An isolated collection of code will not achieve sentience. For it to be sentient in the biological sense it needs access to sensors and actuators in order to ‘experience’ its environment. For it to be sentient in the machine sense it needs access to a network in order to experience a closed or open environment.

    1. Harold Stephen (Steve) Hayden

      Thx for the comment, I absolutely concur. Even more reason to conclude that Google’s “sentient AI” assessment by its (now former) engineer is unfounded.
