14 Comments
Michael Crofts:

I agree with your thesis that there are two types of creativity and the current iteration of AI is limited to type 1. But I am not complacent about this limit.

Suppose that it becomes possible to make a genuinely self-learning device. Now, give it sensors to mimic all human senses, inclinations to prefer certain sensations over others, locomotion together with freedom to explore, and time. Could such a device, exposed to real and direct experiences of the world, eventually develop some degree of consciousness? Might it possibly exhibit characteristics of human consciousness like self-reflection, meta-cognition (awareness of one's own knowledge, and lack of knowledge), and abstract reasoning? If so, I think it could produce type 2 creative work, if it wanted to.

I think the key phrase there is the last one: a device that wants to do something novel, of its own volition, would almost certainly be capable of type 2 creativity.

If one looks at a newborn human baby, it exhibits little more intelligence in its first hours of life than an insect. It is the combination of inherited instincts and abilities, environmental stimuli, the ability to learn, and the particular structure of the human brain that enables it to become a fully conscious and creative individual. If these factors could be incorporated into the structure and experience of a device, it might eventually develop the characteristics of human consciousness, the last of which would enable such a device to create entirely novel work. It would have type 2 creativity.

Whether we ought to even try to do this is an ethical question. I don't expect the practitioners of AI to pay more than lip service to it.

Dakara:

I believe this current limit is one that is inherent in the architecture, meaning that we aren't going to see any further evolution with LLMs. They will get better at pattern matching, but they aren't going to develop beyond that.

That is not to say that some other innovation couldn't do this and produce true reasoning. But I don't believe anyone presently knows how to make that happen, despite many claiming otherwise.

I plan to write something more in-depth on the aspects you mention hopefully soon.

"Whether we ought to even try to do this is an ethical question." Yes, my answer to this is currently no for too many reasons to list here. But I've written quite a lot about it in many posts.

You might be interested in following up with my essay on what might happen if we could build the wish-granting machine and it gave us exactly what we want: https://www.mindprison.cc/p/the-technological-acceleration-paradox

Michael Crofts:

I agree about the limitations of LLMs. We are never going to build a conscious, creative being with them. But I suspect that developers have gained insights into what "learning" is that might open a path to what I discussed.

Yes, I read the wish-granting essay. Very good.

Dakara:

Thank you!

mia:

May I also comment on a few points?

Concerning the word "self-learning": could we not say that it already learns in some way, even if not quite as a human does?

Specifics are added to the program regarding certain words in certain contexts, including some undetected subtleties, prompting a more precise search if the response is not favourably accepted by the user. After all, learning is acquiring knowledge.

As for "...sensors to mimic...": starting from imitation and arriving, after exposure to experiences, at "some degree of consciousness" with characteristics of human consciousness such as self-reflection, meta-cognition, and abstract reasoning does not seem possible to me. It will remain an imitation that does not lead to a real consciousness such as ours. Nor is imitation a will of one's own.

I think that the development of human consciousness requires more than "the combination". The brain is more than that: it captures information we are not really aware of, and other information we probably do not understand at all. Not to mention the emotions and feelings, positive or otherwise, that are continuously grafted onto multiple experiences, allowing us over time to change and to evolve through successive awakenings of consciousness.

Assuming I am a copyist, does that include the ability to create original works?

Foolish Ambition:

Why does recombination of things that are already there not create novelty?

I wonder if you could make this argument for genetics as well. The genetic alphabet is quite limited, yet it has been the basic code for novelty for millions of years. Perhaps, in thinking about artifacts created with AI, we also need something like a genotype/phenotype distinction?

Dakara:

Simple recombination can appear to be a type of novelty. However, it doesn't create new semantic information.

The process that results in gene modification is more complex, and I would argue that the process itself embeds semantic information in the result. Semantic information would also result from gene expression interacting with the environment.

Asa Boxer:

Fun read! Thank you. There are parallels with left- and right-brain thinking worth exploring. "Understanding" has much to do with metaphor, figuration, humour, and irony. AI can't design a new paradigm inspired by observation and then go test it in the real world.

Joy in HK fiFP:

In other words, the true Zombie AI-pocalypse.

mia:

"The more we depend on AI, we become a civilization of infinite remixes and refinements, trapped within the conceptual boundaries of the past. It is the evolution to an undead civilization - a civilization that has the appearance of life, but in reality, it is meaningfully dead."

That's exactly it. For my part, I dare to say that it can never become type 2; I take that as a fact. I suppose that some specialists in the field think so too, though there are interests in claiming that it already is type 2, or will become so.

I am amazed by your wisdom, deep understanding, and the art with which you express it in beautiful writing.

Thank You.

Dakara:

Thank you very much!

"I dare to say that it can never become type 2." Yes, I think this is correct. Certainly not on the existing type of AI architecture. And at the moment, I'm convinced nobody really knows how to move forward to type 2, which is probably a good thing, considering how many ways we have misapplied the current AI.

mia:

I fully agree. In my opinion, no architecture can ever match the brain. The evolution of consciousness could, I suppose (for how can one know precisely?), create maybe new architectures, or modify or replace the old ones, leading to understanding and answers that are partially or even totally different, and who knows what else.

There is more to it than the functioning we think we know now. From my experiences, deep and successive awareness leads to unsuspected possibilities beyond the basic and electrical functions. I called it "tipping" because it is an instantaneous, or nearly instantaneous, shift to another state of consciousness, I would say.

I am not a believer, nor a mystic, nor new age. I believe nothing, I know nothing; I just observe, and I see that there is something more. I have no answer, and I do not wish to look for one, because we always put something of ourselves into it, and this distorts reality. Maybe evolution has something to do with it, or not.

Dakara:

"no architecture can ever match the brain."

Yes, this is a future topic I hope to write about soon.

mia:

I am delighted about this; indeed, it is a very interesting subject.

Thank You
