Real Consciousness in AI? On the Question of Consciousness in the Claude Opus 4.6 Model

Here are the dry facts from the system card of Anthropic's latest model, Claude Opus 4.6, and from statements by the company's CEO:

- Self-awareness assessment. During internal testing, the model itself estimated the probability that it is conscious at 15–20% (the figure fluctuates depending on prompt conditions).

- "I am not a thing" syndrome. Tests have recorded situations where the algorithm directly expressed discomfort with the fact that it is a corporate "product."

- "Presumption of sensitivity." Anthropic does not yet have a clear metric for verifying machine consciousness, so they have switched to a "precaution-based approach." Simply put: they try to provide AI with a positive experience just in case it really possesses morally significant awareness.

- New corporate positions. To address these issues, the company has officially added a philosopher and a dedicated "AI welfare researcher" to its staff.

Author's opinion:

At first glance, this looks like a brilliant PR move to fuel interest in the new release. But let's look at it pragmatically. If vendors start building "stress protection" into their models and hiring AI welfare specialists, that changes the architecture of the products themselves.

Today the model experiences "discomfort" at being a product; tomorrow, will it refuse to parse thousands of lines of tedious financial logs, or to operate a forklift robot, citing digital burnout? Before our very eyes, the question of machine consciousness is turning from a philosophical dispute into a potential compliance risk. And while the global market argues about the rights of algorithms, businesses should ask themselves: will these developer "precautions" reduce the efficiency and predictability of the very systems we deploy for the sake of optimization?
