Structure of thought

Adner Valle

Intellectual property · authorship · technology

Editorial Reading

Authorship in the Algorithmic Era: Refuting the "Physicalist Fallacy" in Intellectual Property

I. Abstract and statement of the problem

A tension has always existed between intellectual property law and innovation, stemming from the emergence of disruptive technological paradigms and the conservatism of traditional legal doctrines. From the invention of the movable-type printing press in the 15th century, through chemical photography and digital cinematography, to the computer age, various international regulatory frameworks have had to expand and adapt. This adaptation has required recognizing that human creation is rarely an unmediated organic act; on the contrary, expression and inventiveness are intrinsically linked to external tools and complex technical scaffoldings that facilitate the materialization of abstract imagination (Ginsburg, 2020).

However, the advent of generative Artificial Intelligence (AI), large language models (LLMs), and autonomous scientific discovery systems has exposed a regrettable failure to understand how these systems operate and to determine the authorship of the works they help produce.

Various authorities have based their criteria on a legal, administrative, and sociological presumption—entirely obsolete—that authorship or inventiveness indispensably requires direct physical and muscular execution by a human being (Buccafusco, 2016). Under this criterion, offices and courts systematically eliminate the value of cognitive effort, ideation, and creative direction, demanding a mechanical human activity that copyright law itself had already moved beyond.

We currently find ourselves at a crossroads characterized by institutions' inability to distinguish between the intellectual origin of a work and its material fixation. Multiple jurisdictions and institutions have opted, with questionable logical support, to deny copyright and patent protection to works and inventions that are algorithmically assisted or generated. Bodies such as the United States Copyright Office (USCO) demand a demarcation of human versus algorithmic contributions, presuming that the "unpredictability" of the model extinguishes the causal nexus between the user's mind and the final expression (U.S. Copyright Office, 2025).

However, if we pause to reflect on human attitudes toward disruptive technologies, we can see that the current refusal reveals itself as a recurring psychological and institutional reaction. Historically, societies have experienced moral panics before any advancement that threatens to automate human activity, fearing an ontological erosion of anthropological identity (Jones, 2006).

This essay proposes to demonstrate that human creativity has never operated as an act of generation *ex nihilo* (out of nothing); rather, cognition is, at its deepest level, a combinatory process and one of superintendence, where creative control and conceptual direction constitute the true and only essence of authorship (Boden, 2004). In this sense, delegating mechanical execution to a non-human entity—whether it be a 3D rendering engine, a camera, or a neural network—cannot and should not eliminate human authorship. On the contrary, it elevates it to the superior plane of ideation, constraint design, and the directive process.

We firmly believe that denying patrimonial and moral protection to creators and inventors who master AI is equivalent to paralyzing technological progress and betraying the constitutional mandate of innovation law—though we recognize that clear regulation must be established to grant protection to these figures.

II. Historical resistance and institutional adaptation

To understand the current resistance, we must analyze the nature of innovation. History exhaustively documents that the introduction of instruments that alter means of production or challenge traditional perceptions of the human body and mind generates profound friction.

Law has historically operated as a mechanism of social containment that initially reacts through prohibition, censorship, or criminal punishment. However, pressed by macroeconomic needs and scientific progress, the law always ends up abandoning prohibitionism to embrace frameworks of regulation, licensing, and governance that assimilate disruption (Mueller, 2021).

It is reasonable to consider that the reach of artificial intelligence is not yet fully understood, and fear of the unknown has likely influenced the prohibitionist determinations being adopted. However, what disruptive technology has not caused fear? Which scientific advancement has not faced resistance? Evidently, most new technologies and disruptive scientific advances have been resisted or delayed at some point in history. Nevertheless, it is the law (and we, the scientists of the law) that must establish the rules allowing human development in harmony with the other sciences and arts that artificial intelligence will reshape in the coming years.

Luddism and the transfer of power

The Luddite movement, which emerged in central and northern England in the early 19th century (1811-1816), constitutes an exemplary precedent for understanding the dynamics of rejection toward automation.

Contrary to popular belief, which caricatures and ridicules Ned Ludd's followers as individuals with irrational technophobia and intellectual blindness, research reveals they were highly skilled artisans: cotton weavers, hosiers, and croppers (Binfield, 2004). Their insurrection was not actually against the machine; it was against the reconfiguration of social relations of production and the impact they would suffer by being unable to practice a trade they had performed for generations.

In that vein, at the height of the Industrial Revolution, exacerbated by the Napoleonic Wars and the legal prohibition of trade unions (Combination Acts of 1799 and 1800), textile machinery represented an instrument used to degrade labor standards, devalue human craft, and fracture the community fabric.

The immediate response of the British state was punitive and protective of the status quo of capital owners. In February 1812, Parliament passed the Frame Breaking Act, classifying industrial sabotage as a capital offense (Thompson, 1963). Parliamentary opposition to this draconian measure was marginal; its most notable expression was Lord Byron's historic speech in the House of Lords, where he severely questioned the morality of a system that shed citizens' blood to protect the efficiency of machines (Byron, 1812).

Today, proponents of "neo-Luddism" argue that the algorithmic autonomy of AI threatens to become independent of human ethical ends. Nevertheless, historical lessons reveal an essential point: initial resistance stems from fear of alienation and economic obsolescence.

It is a well-known fact that, despite violent conflicts, the legal system did not eradicate the looms; it adopted them, and in doing so, it cemented modern labor law.

In that sense, prohibiting AI in intellectual property registries is the contemporary equivalent of destroying mechanical looms—which we consider a reactionary measure that the course of time and the economy itself will eventually correct.

While this precedent is illustrative, we must acknowledge that the introduction of these novel technologies has affected and will continue to affect a portion of the population. However, it is up to all of us to find a solution to minimize the impact and strengthen human growth, as has occurred historically.

Evolution in medicine

Another area that has always faced resistance is the field of medical science, which provides precise historical parallels on how disruptive practices transition from condemned aberrations to regulated pillars of public health.

For nearly fifteen hundred years, human dissection was proscribed in the West, hindered by theological doctrines that demanded absolute preservation of the body *post-mortem* to guarantee eschatological resurrection (Ghosh, 2015). When empirical medicine began to demand profound anatomical analysis, English law confined dissection to the role of an instrument of infamy and criminal punishment. The Murder Act of 1752 stipulated that only the bodies of executed murderers could be dissected. However, in the 19th century, massive demand for cadavers by growing schools of anatomy generated a black market, whose breaking point was the West Port murders in Edinburgh (perpetrated by Burke and Hare in 1828) and the "London Burkers" murders in 1831.

Faced with social revulsion, the British Parliament understood that restriction limited scientific progress and fostered clandestinity. The enactment of the Anatomy Act on August 1, 1832, transformed the legal framework, abolishing the link between medical advancement and criminal punishment, secularizing the body through a licensing system for anatomists, and regulating donation.

An identical pattern was observed in the history of immunology. When country doctor Edward Jenner introduced the empirical smallpox vaccine in 1796, inoculating infectious cowpox matter into humans, the proposal was again met with indignation. Conservative sectors argued that injecting animal lymph violated the natural order and contaminated the "purity" of the human species.

When the Vaccination Acts (1840-1853) made immunization mandatory in the United Kingdom, massive resistance erupted, with Anti-Vaccination Leagues forming in epicenters like Leicester. Social pressure forced the law to flex, and in 1898, legislation introduced the concept of the "conscientious objector" for the first time, resolving the ethical friction without halting public health progress (Fitzpatrick, 2005).

Consider the social, economic, scientific, and health delay we would face today if the law had not evolved to regulate these acts—the number of diseases, annual deaths, and reduced life expectancy. However, thanks to the law and actors who understood that prohibitions are not the solution, we currently enjoy health levels unthinkable 150 years ago.

Despite the well-known advancement derived from vaccines, body dissection, and other related practices, the health industry remains one of the most regulated globally due to its implications. Therefore, there is no justification for limiting scientific progress derived from tools like AI when precedents exist showing that even delicate matters have found alternatives in regulation.

Intellectual property facing fixation technologies

Copyright is a historical record of tensions between purist visions of human execution and the emergence of new creative devices. Every technology that automated or reduced the physical effort of the artisan was initially marginalized by registration offices and courts, accused of being a simple mechanical process devoid of human creative activity.

To illustrate this temporal myopia, it is necessary to analyze the technologies that today form the basis of global cultural industries:

| Disruptive Technology | Initial Rejection Argument | Definitive Legal Evolution and Assimilation |
| --- | --- | --- |
| Photography | Considered a purely mechanical and photochemical process. | In *Burrow-Giles Lithographic Co. v. Sarony* (1884), the U.S. Supreme Court held that photography was protectable due to the photographer's "intellectual conception." |
| Cinematography | Early film was dismissed as an automated report of real events (a utilitarian record) or a simple series of disconnected photographs. | Pioneers (e.g., Thomas Edison) had to register their films as thousands of printed photos. Total legal assimilation arrived with statutory amendments such as the Townsend Amendment of 1912 in the United States. |
| Phonograms (Music) | Acoustic fixation in piano rolls or wax cylinders was not considered a "copy" or "writing" because it was unintelligible to the human eye, according to *White-Smith v. Apollo Co.* (1908). | Sound recordings obtained full protection through the Sound Recording Act of 1971. |
| Computer Programs | Binary object code was initially rejected for being utilitarian, unintelligible to human reading, and lacking aesthetic narrative. | Following thorough analysis (CONTU reports), the 1980 amendments to the U.S. Copyright Act included software as a literary work, protecting the instructions that make the machine function. |

The most direct legal analogy for artificial intelligence stems from the 1884 ruling in *Burrow-Giles Lithographic Co. v. Sarony*. When it was argued that a photograph of Oscar Wilde could not possess copyright because it was the byproduct of an apparatus, the Court determined that the machine is simply a conduit.

The tribunal ruled that Sarony was the "originating author" because the work derived from his "original mental conception" and his stage direction (Ginsburg, 2020).

Thus, if a human operator conceives an architectural vision, invests effort in establishing the parameters of a language or diffusion model, designs iterative prompts, and meticulously selects the generated outputs, we can determine that AI fulfills exactly the same function as Sarony's camera.

Any legal scholar can deduce that rejecting patrimonial protection today for algorithmically assisted works is to return to a prohibition overcome 150 years ago.

Naturally, authorities must establish clear rules to verify that the result obtained is the product of genuine intellectual labor by a prompt engineer.

III. The pharmaceutical dilemma

Setting aside aesthetic and cultural debates over copyright, the real impact of requiring manual human execution takes on existential and macroeconomic tones in the field of biotechnological patents and the pharmaceutical industry.

Drug development operates under the strict "Incentive Theory." The temporary and exclusive exploitation guaranteed by a patent is the only viable mechanism designed to resolve the market failure intrinsic to innovation. It serves to compensate for the enormous costs, time, and extremely high risk levels (clinical attrition rates) necessary to bring a molecule from the lab to the patient, versus the nearly zero replication costs for generic manufacturers (DiMasi et al., 2016).

The traditional pharmaceutical R&D process requires cycles of 10 to 15 years and mobilizes capital investments exceeding one to two billion dollars per approved compound. Advanced artificial intelligence promises to reverse the well-known Eroom's Law, the observed decades-long exponential decline in drug-discovery productivity per dollar invested.

This is easy to understand because, through deep learning neural networks, *in silico* generative chemistry, and protein folding prediction, AI can identify therapeutic solutions, simulate clinical trials, and optimize pharmacokinetics in a matter of months, substantially reducing initial costs. However, the antiquated requirement that a human must mentally and mechanically "conceive" every link in the invention jeopardizes this progress.

To exemplify the gravity of retaining a strict execution requirement in the contemporary regulation of biotechnological patents (or any other field), it is essential to analyze the following scenario of corporate decision-making in the face of an emerging pandemic pathogen, derived strictly from current market metrics and realities:

| Critical Corporate Decision Variable | Path 1: Traditional Scientific Development (Human / Empirical) | Path 2: AI-Driven Development (Algorithmic / Generative) |
| --- | --- | --- |
| Projected Preclinical Discovery Time | 4 to 6 years of testing. | 2 to 18 months via predictive modeling. |
| Estimated Identification and Optimization Cost | Extremely high (frequently exceeding $500 million in early stages). | Dramatic reduction. |
| Patentability Status and Legal Certainty (Current) | High legal certainty; manual human conception and execution are clear and documentable. | High legal uncertainty; imminent risk of formal rejection. |
| Investment Committee and Feasibility Decision | Approved. Formation of the exclusive right and return on investment (ROI) are guaranteed. | Rejected or postponed. Risk of falling into the public domain and loss of capital. |
| Underlying Public Health Cost (Public Interest) | Tens or hundreds of thousands of victims during the waiting years. | Theoretically rapid treatment, but R&D and clinical trials are abandoned due to a total lack of market incentives. |

In conclusion, artificial intelligence is a powerful tool that can save thousands or millions of lives and reduce production costs to the benefit of patients.

Jurisdictional fragmentation and the evolution of the German Federal Court of Justice (BGH)

The global debate over algorithmic inventiveness has deepened with the DABUS project (Device for the Autonomous Bootstrapping of Unified Sentience), led by Stephen Thaler.

DABUS autonomously generated technical designs, leading to patent applications naming the AI as the inventor. Western judicial orthodoxy, represented by the U.S. Federal Circuit Court of Appeals in *Thaler v. Vidal* (2022) and the U.K. Supreme Court in *Thaler v Comptroller-General* (2023), ruled rigidly that patent laws unequivocally require the inventor to be a "natural person," denying the grant due to the machine's inability to exercise legal capacity or assign rights.

However, this situation has fractured thanks to the profound study of European jurisdiction. In a ruling issued on June 11, 2024, the German Federal Court of Justice (*Bundesgerichtshof* or BGH, resolution X ZB 5/22) addressed the DABUS case by establishing a regulatory solution essential for the 21st century.

The BGH clarifies that the naming requirement does not force the invention to be human-made, but simply demands identifying a human with the right to represent that "property" in the legal world. This interpretation bridges the gap between material reality (machine creation) and legislative requirement (human inventor), protecting the patent while acknowledging the technological means.

The BGH reaffirmed the principle that an artificial intelligence lacks legal personality and therefore cannot appear in the formal register as an inventor. However, the German Court established an important distinction: inventions generated through the use of artificial intelligence are fully patentable under German law.

Germany's highest civil court consolidated the doctrine of "inventive causality," determining it sufficient to name as inventor the natural person who significantly influenced, prepared, or caused the AI system to generate the technical solution.

The BGH reasoned that attributing the invention to the human does not require the human to have made an independent inventive contribution separate from the machine; the act of configuring the problem, selecting training data, instructing the neural network, and recognizing the technical utility of the output is a causal and legally significant act that legitimizes human ownership. This interpretation saves the incentive system because it prohibits the absurdity of granting rights to software while allowing exclusive exploitation of an invention obtained using these means.

This is a viable solution to the problem; however, it needs to be materialized in clear laws that allow all stakeholders to have certainty that their AI-generated inventions and works will be protected and will not depend on the interpretation of a court that may change its criteria.

Strategic delay and public interest

If patent offices reject the German doctrine and maintain a restrictive interpretation requiring manual human intervention in chemical "conception," the global market will suffer a devastating documented phenomenon known as "strategic delay."

Anyone will agree that, in normal times, this represents market inefficiency. However, in times of global public health crises (such as the COVID-19 pandemic), this strategic delay costs thousands of lives.

Faced with regulatory uncertainty about being unable to obtain or defend a patent for a molecule designed rapidly by an AI (risking it falling prematurely into the public domain), pharmaceutical executives will be incentivized to act against scientific efficiency. Corporations will order their scientists to abandon algorithmic simulations in favor of slower, costlier *in vitro* analog processes, solely to document a "physical human execution" that satisfies archaic patent examiners.

Evidently, the law must evolve to establish the mechanisms needed so that the use of these tools benefits both the patient's health and their finances, by eliminating most of the time-consuming processes.

IV. Technological neutrality

The copyright system's refusal to protect works generated through generative algorithms exhibits an evident contradiction when contrasted with modern interpretive principles and the peaceful assimilation of other technologies in creative industries.

The Supreme Court of Canada, in *Canadian Broadcasting Corp. v. SODRAC 2003 Inc.* (2015 SCC 57), determined that while technological neutrality cannot autonomously rewrite the express words of a legislative statute, it does operate as an indispensable guiding and normative principle to maintain the delicate balance between users' rights and authors' fair remuneration.

This principle dictates that intellectual property law must not discriminate or impose evaluative asymmetries based solely on the medium, platform, or technological mechanism used for the production, fixation, or distribution of a work.

The restrictive USCO precedent

Contradicting this principle, administrative agencies have adopted a negative approach toward artificial intelligence. In January 2025, the United States Copyright Office (USCO) published its long-awaited *Copyright and Artificial Intelligence, Part 2: Copyrightability Report*. In this report, the USCO reiterated its previous administrative criteria (as in the *Zarya of the Dawn* case): it ruled that human authorship is the unshakeable foundation of protectability and, therefore, material autonomously generated by AI models lacks copyright.

We consider the most problematic aspect of the 2025 report to be its conclusion on prompt engineering. The USCO stipulated that "based on the operation of current technology, prompts alone do not provide sufficient control over expressive elements" to grant authorship.

The institution argues that generative models operate in an unpredictable and non-deterministic manner, introducing a level of digital noise and interpolation that separates the user's will from the formation of the final pixel or syntax. Only substantial subsequent human modifications or the curatorial arrangement of AI output merit partial protection.

But what happens when the author modifies parameters, gives new instructions, and adds or removes elements until reaching a desired result? While we may agree with the authority that a prompt in general terms might not suffice to obtain protection, iteration until the prompt operator reaches a desirable result should be protectable. To that end, states should establish regulations that protect these works under clear criteria, verifying that the final result is the product of sustained interaction with the AI model and not a random output obtained without meaningful intervention.

Computer-Generated Imagery (CGI)

The fragility and absolute fallacy of this argument based on direct "control" are contradicted when examining entire industries where physical execution is deeply delegated to algorithmic architectures, yet protection exists.

The clearest example lies in computer-generated imagery (CGI). In high-value franchises like *Avatar* (James Cameron) or in the Pixar ecosystem, the overwhelming aesthetic does not exist in the physical world, nor is it captured by traditional lenses. Directors do not hand-draw the millions of polygons that compose complex fluids or digital fur. They interact through high-level commands, adjusting physical parameters in powerful 3D production software (such as Unreal Engine or Autodesk Maya). It is the software algorithm that executes the probabilistic calculation of light bounce (ray tracing) and gravity, generating textures that the human cannot mathematically predict or control at the level the authority demands.

Despite this technical distance between the directive command and the material fixation of the scene, copyright law recognizes directors and studios as integral authors of the audiovisual work.

This comparison reveals unjustified discrimination: if the law accepts that James Cameron retains absolute authorship while delegating massive information processing to CGI server farms, the refusal to recognize the curatorial and semantic effort behind an iterative prompt in Midjourney, ChatGPT, or any other AI is an act of arbitrary discrimination that violates technological neutrality.

To verify this, we present the following table:

| Analyzed and Implemented Technology | Degree of Human Physical Execution over the Final Result | Current Treatment in Intellectual Property Doctrine | Predominant Institutional Justification |
| --- | --- | --- | --- |
| Modern Digital Photography | Minimal and highly automated. The user frames and presses a button; sensors and processors determine focus, ISO, white balance, and light correction. | Fully protected work | The choice of the moment, the subject, and the framing configuration constitute in themselves the intellectual creative and original act. |
| CGI Software (e.g., *Avatar* production) | Zero over individual pixel formation. Autonomous algorithms process fluid dynamics, light collisions, and complex textures. | Fully protected work | Master guidelines, scene curation, and the director's conceptual vision guide and subordinate the algorithmic tool. |
| AI Image/Text Generation (Midjourney/LLMs) | Zero over pixel materiality or predictive syntax. The user exercises control through iterative textual prompting and parameterization of constraints. | Rejected (relegated to the public domain) | The human supposedly "does not form the image"; the machine "interprets" unpredictable commands autonomously. |

V. Neuroscience and the Mastermind doctrine

In the following section, we will demonstrate that, to overcome the requirement for physical execution and manual manipulation, the law must rely on the convergence of the philosophy of art, judicial precedents on collaboration, and neuroscientific evidence validating the intellectual effort of "ideation."

Conceptual art

Copyright law rests on the idea-expression dichotomy, protecting only expression. However, what happens when the "direction" of the idea is so precise that it constitutes the work itself? What happens when the author performs deep analysis and materializes the result in their mind? The Conceptual Art movement of the 1960s offers a theoretical solution to this question.

In his manifesto *Paragraphs on Conceptual Art* (1967), Sol LeWitt established a paradigm that challenges materialistic fetishism: "In conceptual art the idea or concept is the most important aspect of the work. When an artist uses a conceptual form of art it means that all of the planning and decisions are made beforehand and the execution is a perfunctory affair. The idea becomes a machine that makes the art" (LeWitt, 1967).

The contemporary artist using AI, under verifiable criteria showing that they performed the mental exercise and introduced successive instructions to reach a desirable result, acts exactly within this parameter outlined by LeWitt and Marcel Duchamp (with his readymades). The human user designs the guiding axis, shapes the operational rules, introduces the logical context, and iterates with negative constraints. Meanwhile, the neural network is simply the subroutine that executes the "perfunctory affair" of synthesizing pixels or probabilistic vectors.

The Mastermind doctrine and *Li v. Liu*

This aesthetic conceptualization is not alien to positive law, as demonstrated below.

Anglo-Saxon jurisprudence has consolidated, for over a century, the Mastermind doctrine and the principle of "causal superintendence." The photography case, *Burrow-Giles* (1884), defined the author as the intellectual originator, and since then, the use of cameras as tools has been protected by law.

This mandate was modernized and carried to its maximum expression in *Aalmuhammed v. Lee* (202 F.3d 1227, 9th Cir. 2000). In this dispute over joint-authorship rights for the film *Malcolm X*, the U.S. Court of Appeals for the Ninth Circuit determined that the author of a work of immense complexity is the individual (or corporate entity) who exercises "superintendence" over the whole; that is, who acts as the "mastermind" by retaining artistic control, dictating guiding directions, and possessing veto power over final integration, regardless of who materially executes the parts.

Analogous to an orchestral conductor who plays no physical instrument but orchestrates the unified work through control of variables, the AI engineer exercises superintendence and iterative veto over algorithmic outputs.

Does this sound like a prompt? Let's look at the comparison:

A film director does not draw every frame or manually calculate light refraction; nevertheless:

  • Determines the visual style of the work (realistic, expressionist, minimalist).
  • Defines the dominant color palette.
  • Decides on the type of lighting (hard light, backlight, low key).
  • Adjusts the composition of the frame (close-up, symmetry, depth of field).
  • Indicates the narrative rhythm.
  • Orders a scene to be repeated until the desired result is achieved.
  • Discards takes that do not satisfy their vision.

They do not manipulate directly; they only direct the variables.

The prompt engineer performs an analogous operation:

  • Defines the style (baroque oil, editorial photography, hyper-realistic render, conceptual art).
  • Specifies lighting (“soft light,” “dramatic shadows,” “golden hour”).
  • Adjusts colorimetry (“high saturation,” “muted tones,” “cool palette”).
  • Determines composition (“wide angle,” “centered subject,” “rule of thirds”).
  • Establishes format and aspect ratio (16:9, 1:1, vertical cinematic).
  • Introduces negative constraints (no text, no brands, no distortions).
  • Iterates results and eliminates versions that do not fulfill their intent.

They also do not directly manipulate every pixel; nevertheless, they direct the variables.

In both cases, the material execution—whether the camera, the CGI engine, or the neural network—operates as a technical instrument. Therefore, the difference lies not in the presence or absence of a machine, but in who retains substantial creative control and the final decision over the work.
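Although our argument is legal rather than computational, the variables the prompt engineer directs can be made concrete in a brief sketch. The following Python fragment is purely illustrative: the `CreativeDirection` structure, its field names, and the `--no` negative-constraint syntax (loosely echoing Midjourney-style conventions) are our own assumptions, not any platform's actual API.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class CreativeDirection:
    """Hypothetical record of the variables a prompt engineer directs:
    style, lighting, colorimetry, composition, format, and vetoes."""
    style: str
    lighting: str
    palette: str
    composition: str
    aspect_ratio: str
    negative: List[str] = field(default_factory=list)

    def to_prompt(self) -> str:
        # Compose a single textual instruction from the directed variables.
        parts = [self.style, self.lighting, self.palette,
                 self.composition, f"aspect ratio {self.aspect_ratio}"]
        prompt = ", ".join(parts)
        if self.negative:
            # Negative constraints: elements the author vetoes in advance.
            prompt += " --no " + ", ".join(self.negative)
        return prompt


direction = CreativeDirection(
    style="editorial photography",
    lighting="golden hour, soft light",
    palette="muted tones",
    composition="centered subject, rule of thirds",
    aspect_ratio="16:9",
    negative=["text", "brands", "distortions"],
)
print(direction.to_prompt())
```

Each successive revision of such a record (tightening the palette, vetoing a newly observed artifact, re-running the model and discarding unsatisfactory outputs) is precisely the iterative superintendence and veto power that the Mastermind doctrine describes.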

This situation has begun to prevail in specialized courts: in November 2023, in the *Li v. Liu* ruling (Beijing Internet Court), Chinese justice formally recognized copyright for an image generated through the open-source platform Stable Diffusion. The Beijing court meticulously analyzed the plaintiff's workflow and rejected the thesis of "mere mechanical attainment." It ruled that the deliberate selection of technical parameters, the structured design of positive and negative prompts, the iteration of the process, and final curation clearly evidenced the personality, aesthetic choices, and significant intellectual effort of the human creator.

For the court, advanced AI use fulfills the originality threshold required by copyright law when verifiable human direction exists.

In conclusion, while we agree that not every AI-generated work or invention should be protected, we consider that anyone who proves robust participation and direction should be considered an author or inventor.

The Mexican criterion: An underdeveloped opportunity

In the Mexican context, the discussion on intellectual property facing new technologies is still in an incipient stage. However, the recent intervention of the Supreme Court of Justice of the Nation in matters related to intellectual property opens a relevant space to rethink traditional criteria under which these problems are analyzed.

More than offering a definitive answer, the national context evidences an opportunity to build our own doctrine that allows addressing the relationship between artificial intelligence, creation, and ownership of rights without uncritically reproducing the conceptual errors observed in other jurisdictions.

Neurocognitive Biology: "The Prompting Brain"

Another argument for denying protection to works generated with artificial intelligence (as determined by the USCO) assumes that typing a text command for an AI is a trivial, fast, and intellectually passive act. Recent neuroscientific findings, however, call this presumption into question.

The functional magnetic resonance imaging (fMRI) study titled "The Prompting Brain: Neurocognitive Markers of Expertise in Guiding Large Language Models" (Al-Khalifa et al., arXiv, 2025) measured patterns of brain activity in humans interacting with generative language models.

Preliminary neuroscientific studies suggest there are differences in brain activity associated with higher levels of mastery in interacting with artificial intelligence systems. Unlike novice users, expert AI operators showed significantly greater functional connectivity in regions critical for high-level abstract processing: the left middle temporal gyrus (MTG, key in complex semantic retrieval) and the left frontal pole of the prefrontal cortex.

The prefrontal lobe is the anatomical apex of human cognition, responsible for executive control, multi-step planning, hypothetical reasoning, goal-directed sequential behavior, and metacognition.

Biologically, the prompt engineer's brain is not performing a routine process; it is using its creativity at the highest level to "program" the artificial neural network through natural language.

The evidence suggests that the "ideation" of instructions is a cognitively demanding and biologically measurable task. AI generation is therefore not passive, as administrative authorities maintain; it is a deliberate act of higher human cognition, comparable to the physical effort of the brushstroke that consecrated the painter's right.

Anyone who has made a creative request to an AI has necessarily had to think about or imagine the sought result, just as a painter imagines the result they want to capture before painting. And just as the painter, during creation, introduces additional elements, removes those they dislike, and changes colors, the AI user makes new requests that refine the work toward what they envisioned.

VI. Conclusions

The institutional requirement that recognition of authorship and patent inventiveness depend on direct physical execution constitutes a historical error. This essay has traced the pattern: the same prohibitionist logic that sought to ban mechanical looms, criminalize anatomical dissection, and deny protection to photography reappears today, unchanged and unreflective, in the face of generative artificial intelligence.

This bias, disguised as a humanist defense, contravenes the teleology that has sustained the global intellectual property system since its inception: promoting the progress of science and the useful arts. When doctrine inverts that mandate and turns it into an obstacle, it ceases to be law and becomes the ideology of inertia.

Current resistances are not immovable metaphysical doctrines. They are symptoms of a recurring moral panic that, in each historical cycle, the law has eventually overcome through regulation. The difference between prohibiting and regulating is not philosophical: it is the interval during which lives are lost, investments are halted, and the field is abandoned to those who move forward without restrictions.

The law does not have the option not to decide. Deciding late is also a decision, and its costs are paid by the public interest.

Therefore, the legislative solution cannot be postponed. It requires adopting the doctrine of inventive causality consolidated in 2024 by the German Federal Court of Justice (BGH, X ZB 5/22): AI-generated inventions are fully patentable when a human being proves to have instructed, caused, or validated the machine's inventive direction. This standard resolves the problem without granting rights to software: it preserves the incentive system by anchoring human ownership in the process, not in muscular execution.

In the creative industries, the path is the precedent of the Beijing Internet Court (*Li v. Liu*, 2023): copyright protection proceeds when verifiable human direction exists (iterative design, deliberate parameterization, curation of the result). To reject that standard while recognizing full authorship for a director who delegates millions of calculations to a CGI engine is arbitrary discrimination that violates the principle of technological neutrality established in *CBC v. SODRAC* (2015 SCC 57).

The proposal is concrete: multilateral forums (foremost WIPO) and national legislators must reform current treaties and statutes to eliminate requirements of direct human fixation. In their place, a single standard of attribution founded on substantial and verifiable creative control should be instituted: superintendence of the process, iterative design of constraints, and curatorial selection of the result as the determining factors of intellectual parentage.

Regulating artificial intelligence will not destroy human authorship. It will free authorship from its material ties and return it to what it has always been in its highest form: pure ideation, strategic direction, governance of the result. Denying that possibility does not protect the creator; it condemns them to demonstrate a physical activity that is often irrelevant.

Each generation has faced its disruption, and the law, invariably, has found the way to regulate it without betraying progress. This generation will be no exception. The question is not whether it will happen, but how long it takes us to accept it—and what that time costs.


VII. Bibliography

Jurisprudence

  • Aalmuhammed v. Lee, 202 F.3d 1227 (9th Cir. 2000).
  • Canadian Broadcasting Corp. v. SODRAC 2003 Inc., 2015 SCC 57.
  • Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53 (1884).
  • Supreme Court of Justice of the Nation (SCJN - Mexico). (2025). Amparo Directo 6/2025 (Second Chamber).
  • Beijing Internet Court (China). (2023). Li v. Liu, Civil Judgment (2023) Jing 0491 Min Chu No. 11279.
  • Federal Court of Justice (Bundesgerichtshof - BGH, Germany). (2024). DABUS Case, Resolution X ZB 5/22 of June 11, 2024.

Legislation and Institutional Reports

  • U.K. Parliament. (1812). Frame Breaking Act, 52 Geo. III c. 16.
  • U.K. Parliament. (1832). Anatomy Act, 2 & 3 Will. 4 c. 75.
  • United States Copyright Office (USCO). (2025). Copyright and Artificial Intelligence, Part 2: Copyrightability Report (Published January 29, 2025). Library of Congress.

Doctrine and Scientific Literature

  • Al-Khalifa, H. S., et al. (2025). The Prompting Brain: Neurocognitive Markers of Expertise in Guiding Large Language Models. arXiv preprint arXiv:2508.14869.
  • Binfield, K. (Ed.). (2004). Writings of the Luddites. Johns Hopkins University Press.
  • Boden, M. A. (2004). The creative mind: Myths and mechanisms (2nd ed.). Routledge.
  • Buccafusco, C. (2016). A Theory of Copyright Authorship. Virginia Law Review, 102, 1229-1295.
  • DiMasi, J. A., Grabowski, H. G., & Hansen, R. W. (2016). Innovation in the pharmaceutical industry: New estimates of R&D costs. Journal of Health Economics, 47, 20-33.
  • Fitzpatrick, M. (2005). The anti-vaccination movement in England, 1853–1907. Journal of the Royal Society of Medicine, 98(8), 384–385.
  • Ghosh, S. K. (2015). Human cadaveric dissection: a historical account from ancient Greece to the modern era. Anatomy & Cell Biology, 48(3), 153-169.
  • Ginsburg, J. C. (2020). Burrow-Giles v. Sarony (US 1884): Copyright protection for photographs, and concepts of authorship in an age of machines. Carolina Academic Press.
  • Jones, S. E. (2006). Against technology: From the Luddites to neo-Luddism. Routledge.
  • LeWitt, S. (1967). Paragraphs on Conceptual Art. Artforum, 5(10), 79–83.
  • Mueller, G. (2021). Breaking things at work: The Luddites are right about why you hate your job. Verso Books.
  • Thompson, E. P. (1963). The making of the English working class. Victor Gollancz.