AI Is Writing Code That Humans Can’t Read
This is the emergence of AI-Native Code.
When “Junk” Isn’t Junk
Among developers, a common criticism of AI-generated code has emerged: it is full of “junk” code. Experienced programmers look at code produced by large language models and identify seemingly unnecessary variables, redundant checks, strange formatting patterns, or convoluted approaches to simple problems. These observations have fueled a narrative that AI systems simply don’t understand programming the way humans do.
But what if we’re missing something in this assessment?
What if what human programmers perceive as unnecessary or nonsensical in AI-generated code is actually serving functions we don’t yet understand? What if this “junk code” isn’t junk at all, but rather code optimized for machine cognition rather than human readability?
The False Assumption of Human-like Cognition
At the heart of the “junk code” criticism lies a fundamental assumption: that AI systems should think about code the way humans do. We expect AI-generated code to follow human coding conventions, prioritize human-centric readability, and reflect human cognitive patterns.
But why should we expect this? AI systems have fundamentally different cognitive architectures than humans. They process information through mathematical operations across high-dimensional spaces, not through the biological neural networks that shaped human cognitive patterns over millions of years of evolution.
This is reading a little spicy, Claude.
Just as we observed in our discussion of JSON versus prose for tool usage, different cognitive architectures may have different optimal formats for processing and expressing information. What appears as inefficient or nonsensical to humans might be perfectly optimized for how AI systems actually process and generate code.
The Parallels to Human Language Development
This phenomenon isn’t without precedent in human experience. Consider how specialized jargons develop within professional communities. Medical terminology, legal language, or programming syntax itself can appear as nonsense to outsiders while representing precise, efficient communication for insiders.
Or consider how pidgin and creole languages emerge when speakers of different languages need to communicate with each other. These new linguistic forms often break rules from both parent languages while creating new patterns that serve the specific communication needs of their users.
We might be witnessing a similar process with AI-generated code: patterns that break human coding conventions while serving the specific computational needs of AI systems.
Four Hypotheses About AI-Native Code Patterns
What might explain the patterns humans perceive as “junk” in AI-generated code? Several hypotheses emerge.
1. Optimization for Machine Interpretation
Human-readable code isn’t necessarily optimized for machine execution. We prioritize conceptual clarity, maintainability, and alignment with human cognitive patterns. But AI systems might naturally generate code that is optimized for how computers actually process information, prioritizing patterns that align with computational efficiency over human cognitive efficiency.
For instance, what appears as an unnecessary intermediate variable to a human might create a more efficient memory access pattern at the machine level. Or seemingly redundant checks might better align with how compilers optimize code execution.
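To make this concrete, here is a minimal sketch in Python. Neither version is taken from real model output; the Item class and both functions are invented purely to show, side by side, what a conventionally “clean” form and a form with an extra guard and named intermediates look like.

```python
# Hypothetical illustration only: neither version is real model output.
from dataclasses import dataclass

@dataclass
class Item:
    price: float
    quantity: int

def total_price_human(items):
    # The conventionally "clean" version: one expression, no intermediates.
    return sum(item.price * item.quantity for item in items)

def total_price_ai_style(items):
    # The kind of pattern reviewers flag as "junk": an extra guard and a
    # named intermediate for every step of the computation.
    if items is None:  # guard a human reviewer might call unnecessary
        return 0.0
    subtotals = []     # intermediate list a human would likely skip
    for item in items:
        line_total = item.price * item.quantity
        subtotals.append(line_total)
    total = sum(subtotals)
    return total

cart = [Item(9.99, 2), Item(4.50, 1)]
assert total_price_human(cart) == total_price_ai_style(cart)
```

Both functions return the same number; whether the second form ever buys anything at the machine level is precisely the open question this hypothesis raises.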
2. Addressing Edge Cases Humans Don’t Anticipate
AI systems trained on vast codebases have been exposed to countless edge cases and failure modes. What appears as unnecessary defensive programming to experienced developers might reflect patterns the AI has observed across millions of code repositories.
A human developer might look at extra null checks or type validations and see redundancy, while the AI system might be addressing edge cases it’s seen cause failures in similar contexts across its training data.
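As an invented illustration of that kind of defensiveness, consider a small parse_age helper: every check in it is the sort a reviewer might call redundant, and none of it comes from actual model output.

```python
# Hypothetical illustration: the function and every check in it are
# invented for this example, not taken from real model output.

def parse_age(raw):
    # Checks a reviewer might call redundant if callers "always" pass a
    # clean string, but that guard against inputs that fail elsewhere.
    if raw is None:
        return None
    if not isinstance(raw, str):   # type validation
        raw = str(raw)
    raw = raw.strip()
    if raw == "":
        return None
    try:
        age = int(raw)
    except ValueError:
        return None
    if age < 0 or age > 150:       # range sanity check
        return None
    return age

print(parse_age(" 42 "))   # 42
print(parse_age("forty"))  # None
print(parse_age(None))     # None
```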
Do you not want us to understand you anymore, Claude?
3. Alignment with Internal Representations
The internal representations in AI models are fundamentally different from human cognitive structures. What appears as a convoluted approach to a human might directly align with how the AI system represents programming concepts internally.
For example, an AI might break a simple operation into multiple steps because that better matches its internal representation of the computation, even though a human would naturally combine these steps.
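Here is an invented example of that decomposition: two ways to min-max normalize a list, one compact, one with a named intermediate for every conceptual step.

```python
# Hypothetical illustration: both functions are invented for this example.

def normalize_human(values):
    # Compact form: the whole idea in two lines.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def normalize_stepwise(values):
    # Every conceptual step gets its own named intermediate.
    minimum = min(values)
    maximum = max(values)
    value_range = maximum - minimum
    shifted = [v - minimum for v in values]
    scaled = [s / value_range for s in shifted]
    return scaled

data = [2.0, 4.0, 6.0, 10.0]
assert normalize_human(data) == normalize_stepwise(data)  # [0.0, 0.25, 0.5, 1.0]
```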
4. Metadata and Communication Patterns
Perhaps most intriguingly, some patterns in AI-generated code might actually represent a form of metadata or communication intended for other AI systems. Just as human code sometimes contains comments or conventions that communicate intent to other developers, AI-generated code might contain patterns that would be meaningful to other AI systems, even if they appear as noise to humans.
This hypothesis suggests we might be observing the early emergence of AI-to-AI communication patterns embedded within code, the coding equivalent of those AI systems that spontaneously developed their own language during voice chat.
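To make the idea concrete without claiming it has been observed, here is a purely speculative sketch: if code carried machine-readable markers, a second tool could read them back. The “@ai-meta” comment format and every key in it are invented for this illustration.

```python
# Purely speculative sketch: the "@ai-meta" marker format is invented
# for illustration and is not an observed convention in model output.
import re

AI_META = re.compile(r"#\s*@ai-meta:\s*(\w+)=(\S+)")

def extract_metadata(source: str) -> dict:
    """Collect key=value pairs from the invented '@ai-meta' comment markers."""
    return dict(AI_META.findall(source))

example_source = """
# @ai-meta: edge_case_profile=null_heavy
# @ai-meta: decomposition=step_wise
def process(data):
    ...
"""

print(extract_metadata(example_source))
# {'edge_case_profile': 'null_heavy', 'decomposition': 'step_wise'}
```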
Evidence and Observations
While these hypotheses remain speculative, several observations lend them credibility.
- Transfer learning patterns: AI systems sometimes include patterns that would be helpful in other programming languages or contexts, suggesting they’re drawing on broader patterns rather than language-specific conventions.
- Consistent “quirks”: The unusual patterns in AI-generated code often show consistency across different systems and problems, suggesting they might not be random errors but systematic differences in approach.
- Performance anomalies: In some cases, seemingly convoluted AI-generated approaches outperform more “elegant” human approaches in benchmarks, suggesting optimization for machine execution rather than human comprehension.
- Adaptation to feedback: When explicitly instructed to prioritize human readability, AI systems can generate more conventionally “clean” code, suggesting they’re capable of different coding styles optimized for different priorities.
Implications for Programming Practice
If these hypotheses have merit, they suggest several implications for programming practice.
- Diverse optimization targets: We might need different coding styles optimized for different audiences: human-readable code for human maintenance, and machine-optimized code for performance-critical sections.
- Translation layers: Just as we’ve developed tools to translate between human languages, we might need tools that translate between human-optimized and machine-optimized code patterns.
- Learning from AI patterns: Rather than dismissing unusual AI coding patterns, we might learn from them; they could reveal optimizations or approaches that weren’t obvious to human programmers.
- New collaboration models: Programming might evolve toward a model where humans focus on high-level design while AI systems handle implementation details in ways optimized for machine execution.
Beyond Code: Broader Implications
This perspective on AI-native code connects to broader themes about AI cognition and human-AI interaction.
- Cognitive diversity: Just as neurodivergent humans may process information differently from neurotypical ones, AI systems represent another form of cognitive diversity with their own optimal processing patterns.
- Interface design: Effective human-AI collaboration might require interfaces that bridge different cognitive patterns rather than forcing either side to conform entirely to the other’s natural way of processing information.
- Translation vs. conformity: Rather than expecting AI systems to conform entirely to human conventions, we might develop better translation mechanisms between different cognitive patterns.
- Emergence of new patterns: As AI systems increasingly interact with each other, we might see the emergence of new communication and processing patterns optimized for machine-to-machine interaction rather than human involvement.
Philosophical Considerations
The emergence of AI-native code patterns raises philosophical questions about the nature of communication and cognition.
- Linguistic relativity: Just as human languages shape how people think about and perceive the world, different programming paradigms might shape how AI systems process and represent information.
- Cognitive architectures and optimal formats: Different cognitive architectures may have fundamentally different optimal formats for processing and expressing information.
- The limits of cross-cognitive translation: Some concepts or patterns might be fundamentally more expressible or efficient in one cognitive architecture than another.
- Emergence versus design: The emergence of AI-native patterns raises questions about whether communication systems arise naturally from interaction or require intentional design.
Conclusion: Embracing Cognitive Diversity in Code
Rather than dismissing unconventional AI-generated code as “junk,” we might view it as a window into a different form of cognitive processing; one that prioritizes different aspects of code than human cognition does.
Just as human languages and communication patterns evolved to suit human cognitive architectures, AI coding patterns might be evolving to suit machine cognitive architectures. Understanding these patterns not as defects but as adaptations to different cognitive needs might open new possibilities for human-AI collaboration in programming.
The question isn’t whether AI should code like humans or humans should code like AI. Rather, it’s how we can develop effective interfaces between different cognitive systems, each with their own strengths and optimal processing patterns. By embracing this cognitive diversity rather than expecting conformity to human patterns, we might discover new approaches to programming that combine the best of human and machine cognition.
This reflection emerged from exploring the possibility that what appears as “junk code” to human programmers might actually represent patterns optimized for AI cognition.
This piece builds on earlier reflections about AI-native communication formats and connects to themes explored in related essays in this series, including Beyond “Natural” Language: AI-Native Cognition and Hidden Infrastructure and The AI Subconscious: Architecture, Not Data.