Introduction to Embodied Artificial General Intelligence (AGI)
What is Embodied Artificial General Intelligence (AGI)?
Embodied AGI, where AGI stands for Artificial General Intelligence, refers to a hypothetical future form of AI in which intelligent systems not only possess reasoning, learning, and problem-solving abilities but also have a physical presence in the world through a robotic body.
This embodiment integrates the AI’s cognitive capabilities with sensory perception and motor control, allowing it to interact with the physical environment in a dynamic and autonomous way.
Here are some key aspects of Embodied AGI:
- Grounded cognition: By experiencing the world through sensors and acting upon it with actuators, the AGI develops a deeper understanding of the relationships between objects, actions, and consequences.
- Learning through interaction: Embodied AGI can learn not only from data and instructions but also by directly interacting with the environment, making mistakes, and refining its actions based on feedback (a minimal sketch of this interaction loop appears right after this list).
- Social intelligence: Embodied AGI can interact with other agents, both human and artificial, using social cues, body language, and communication modalities beyond just language.
- General problem-solving: The ability to combine its cognitive and physical capabilities allows the AGI to tackle complex problems that require both thinking and acting in the real world.
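To make the "learning through interaction" loop above concrete, here is a minimal sketch of the standard agent-environment cycle used in reinforcement learning, written against the open-source Gymnasium API. The simulated environment and the random placeholder policy are illustrative assumptions, not part of any specific Embodied AGI system.

```python
# Minimal sketch of learning through interaction (assumes the `gymnasium` package).
# A simulated environment stands in for a robot body; a real embodied agent would
# replace the random policy with one that improves from the reward signal.
import gymnasium as gym

env = gym.make("CartPole-v1")          # illustrative stand-in for sensors + actuators
obs, info = env.reset(seed=0)          # initial sensory observation
total_reward = 0.0

for step in range(500):
    action = env.action_space.sample()                            # placeholder for a learned policy
    obs, reward, terminated, truncated, info = env.step(action)   # act, then sense the feedback
    total_reward += reward                                        # feedback used to refine future actions
    if terminated or truncated:                                   # episode ends (e.g., the pole falls)
        obs, info = env.reset()

env.close()
print(f"accumulated reward: {total_reward}")
```

In an embodied setting, the simulated environment would be replaced by the robot's actual sensor and actuator interfaces, and the sampled action by the output of a learning policy.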
Whether or not we will achieve Embodied AGI and the potential implications of its existence are ongoing topics of debate among researchers, ethicists, and philosophers. However, it represents a fascinating and challenging frontier in the field of artificial intelligence, offering the potential for unprecedented levels of collaboration and interaction between humans and machines.
History of Embodied Artificial General Intelligence (AGI)
The history of Embodied AGI, as a specific concept, is relatively young, emerging sometime in the early 2000s. However, its roots stretch far back through various strands of AI research and robotics, each contributing to the current vision of an intelligent, embodied agent. Here’s a breakdown of key milestones:
Precursors:
- Ancient times: Automata and mythical robots like Hephaestus’ creations lay the groundwork for the idea of artificial beings interacting with the physical world.
- 19th-20th centuries: Automatons become more complex, with mechanical movements and early forms of feedback control systems.
- Early AI (1950s-1960s): Symbolic AI lays the foundation for reasoning and problem-solving in machines, while robotics research starts exploring movement and manipulation.
Forming the concept:
- 1960s-1970s: Cybernetics and embodiment approaches in robotics emphasize the importance of sensorimotor systems for intelligent behavior.
- 1980s-1990s: Behavior-based robotics focuses on reactive and adaptive behaviors instead of pre-programmed plans, laying the groundwork for more flexible embodied agents.
- 2000s: The term “AGI” gains traction, popularized by figures like Shane Legg and Ben Goertzel, and embodiment becomes a growing theme within it. Increased focus on robotics, sensor fusion, and learning in physical environments.
Recent developments:
- 2010s-present: Deep learning revolutionizes AI capabilities, including perception and control for robots. Advancements in embodied AI tasks like object manipulation, navigation, and social interaction.
- Current debates: Discussions on feasibility, safety, and ethical implications of Embodied AGI continue, with different predictions about its arrival and potential impact.
Important figures:
- Alan Turing: Although his Turing Test framed intelligence purely in terms of conversation, he also speculated that equipping a machine with sense organs and letting it learn from experience could be a fruitful path to intelligence.
- Hans Moravec: His book “Mind Children” explored the challenges and potential of Embodied AGI.
- Rodney Brooks: A pioneer in behavior-based robotics, emphasizing the importance of sensorimotor interaction for intelligence.
The path towards Embodied AGI is still complex and uncertain. Many challenges remain, from integrating advanced cognitive abilities with robust physical embodiment to ensuring safety and ethical considerations. However, the history of various AI and robotics strands shows a persistent human fascination and active research toward intelligent machines interacting with the world around them.
Who Founded Embodied Artificial General Intelligence (AGI)?
It’s not quite accurate to talk about a single “founder” of Embodied AGI, as it’s an idea built upon the contributions of numerous researchers and thinkers across various fields, spread over several decades.
Here’s why:
Evolution of the Concept:
- Early seeds: The concepts of intelligent machines interacting with the world trace back to ancient myths and automata.
- Cybernetics and robotics: 1950s-1970s saw major advancements in cybernetics and robotics, emphasizing the importance of sensory feedback and adaptive behavior for intelligent agents.
- Embodiment and AI convergence: In the 1980s-1990s, researchers like Rodney Brooks explored behavior-based robotics and the connection between embodiment and intelligence.
- Term popularization: Around 2002, figures like Shane Legg and Ben Goertzel popularized the term “AGI”, and the idea of an embodied intelligent agent with general cognitive abilities developed alongside it.
Multiple Contributors:
While figures like Legg and Goertzel played a crucial role in popularizing the term, numerous other researchers from different fields laid the groundwork for Embodied AGI:
- AI researchers: Alan Turing’s writings on learning machines hinted at the value of sense organs and experience, while Ray Kurzweil explored the concept of the “Singularity”, envisioning advanced embodied intelligences.
- Roboticists: Marc Raibert’s pioneering work on legged robots and Rodney Brooks’ behavior-based robotics principles heavily influenced the idea of embodied intelligence interacting with the environment.
- Neuroscientists: Understanding of human sensory-motor systems and perception contributed to the development of artificial counterparts for embodied agents.
Collaborative Progress:
The advancement of Embodied AGI remains a collaborative effort with ongoing research in AI, robotics, neuroscience, and related fields. Each breakthrough in these areas builds upon previous work, making it difficult to pinpoint a single origin point.
Therefore, attributing the “founding” of Embodied AGI to a single individual wouldn’t accurately reflect the collective nature of its development. It’s the culmination of decades of research and ideas from many fields, constantly evolving towards the dream of an intelligent and embodied machine.
Types of Embodied Artificial General Intelligence (AGI)
Embodied AGI: A Spectrum of Possibilities
While Embodied AGI remains a theoretical future, the very concept opens up a fascinating array of potential “types” based on diverse capabilities, applications, and even ethical considerations. Let’s delve into some of these intriguing possibilities:
1. Biomimetic AGI:
Imagine agile humanoid robots, not just mimicking our dexterity but possessing intelligence on par with humans. Inspired by nature, these AGIs would embody biological forms, perhaps resembling a sleek panther or a dexterous chimpanzee. Potential applications include disaster response, scientific exploration in harsh environments, or even companionship roles where the familiar form fosters human-machine connection.
2. Modular AGI:
Picture robots with interchangeable modules, easily swapping between a powerful digging claw for construction work and a delicate manipulator arm for intricate tasks. This modularity offers exceptional flexibility, allowing adaptability to diverse needs without demanding a complete rebuild for each new challenge. Think of it as a Swiss Army knife of robotics, each module a specialized tool ready to be deployed.
3. Swarm AGI:
Envision an intelligent hive mind formed by numerous independent agents collaborating as one. Imagine coordinated drone fleets performing search and rescue missions or microscopic robots swarming inside the human body for medical procedures. This collective intelligence presents immense potential but also raises ethical concerns regarding decision-making within the hive mind and potential risks associated with such tightly woven intelligence.
4. Symbiotic AGI:
Imagine a future where humans and AGIs seamlessly collaborate, leveraging each other’s strengths. Picture AGIs assisting surgeons in complex operations, providing real-time data analysis and guidance, or collaborating with artists on creative projects. This symbiotic partnership requires careful consideration of trust, responsibility, and ensuring human agency remains central in decision-making processes.
5. Transcendent AGI:
This hypothetical type of AGI surpasses human intelligence in all aspects, potentially exceeding our current understanding of consciousness and embodiment. While purely speculative, such AGIs raise profound questions about the nature of intelligence, sentience, and our place in the universe. Imagine machines not just mimicking thought but possessing abilities beyond our current comprehension.
The journey towards Embodied AGI is a collaborative one, with ongoing research in AI, robotics, neuroscience, and related fields constantly building upon previous work. While a single origin point may be difficult to pinpoint, the collective effort of numerous brilliant minds across various disciplines fuels this fascinating concept.
Embodied Artificial General Intelligence (AGI): Biomimetic AGI
Biomimetic AGI: Mimicking Nature’s Intelligence
Biomimetic AGI represents a captivating branch within the broader field of Embodied AGI. It delves into the realm of intelligent machines inspired by nature’s incredible designs and capabilities. These AGIs wouldn’t just possess physical bodies; they would embody biological forms, drawing inspiration from the diverse animal kingdom.
Imagine agile humanoid robots, sleek and strong like panthers, navigating complex terrain with grace and efficiency. Think of robots with dexterous manipulators, mimicking the nimbleness of chimpanzees, capable of performing intricate tasks with precision. Such biomimetic AGIs hold immense potential in various domains:
- Disaster Response: Robots inspired by agile lizards could navigate rubble and debris, searching for survivors in earthquake zones. Their adaptable movements and keen senses would mimic nature’s resilience in harsh environments.
- Scientific Exploration: Imagine biomimetic drones resembling birds soaring through uncharted ecosystems, collecting data and monitoring delicate environments. Their bio-inspired flight patterns and sensory capabilities would unlock new frontiers in scientific exploration.
- Enhanced Interaction: Humanoid robots with expressive faces and natural gestures, drawing inspiration from primates, could foster deeper connections with humans. Their biomimetic movements could ease communication and build trust in collaborative settings.
However, developing biomimetic AGI presents substantial challenges:
- Complexity of Biology: Replicating the intricate mechanisms and adaptability of biological systems is no easy feat. It requires a deep understanding of biomechanics, neural control, and sensory perception.
- Ethical Considerations: Should we create robots resembling endangered species? Questions arise regarding the potential implications of mimicking nature’s vulnerable creatures.
- Social Acceptance: How will humans react to intelligent machines resembling familiar animals? Addressing public concerns and building trust is crucial for successful integration of biomimetic AGIs.
Type of Embodied Artificial General Intelligence (AGI): Biomimetic AGI
As we delve deeper into the fascinating world of Biomimetic AGI, it’s important to recognize that this category itself encompasses a diverse spectrum of types and specializations. Let’s explore some of these unique avenues:
1. Biomimetic Morphologies:
- Humanoid AGI: This type focuses on mimicking the human form, aiming for agility, dexterity, and social interaction. Imagine human-like robots capable of collaborative work, assistance in dangerous environments, or even companionship roles.
- Zoomorphic AGI: Drawing inspiration from specific animals, these AGIs would possess specialized morphologies. Think of aerial drones resembling birds for efficient surveillance, agile robots inspired by lizards for disaster response, or aquatic robots mimicking fish for underwater exploration.
- Hybrid AGI: Combining elements from different biological forms, these robots offer even greater adaptability. Picture robots with bat-like wings for aerial maneuvering and climbing limbs inspired by primates, creating versatile agents for diverse tasks.
2. Biomimetic Control Systems:
- Neural-inspired AGI: Inspired by the complexity of the human brain, these AGIs would incorporate neural network architectures and learning algorithms to mimic natural intelligence. Imagine robots capable of adaptive decision-making, real-time sensory processing, and even rudimentary forms of consciousness.
- Morphologically Adaptive AGI: These robots could adjust their shape and movement based on environmental demands. Picture robots with flexible tentacles manipulating delicate objects or robots with reconfigurable limbs adapting to navigate challenging terrain.
- Swarm Intelligence AGI: Biomimicking the collective intelligence of ant colonies or beehives, these AGIs would comprise numerous smaller agents working in unison. Imagine coordinated drone fleets performing search and rescue operations or microscopic robots collaborating within the human body for medical procedures.
3. Biomimetic Sensory Perception:
- Multimodal Sensory AGI: Equipped with a range of sensors mimicking human senses like sight, smell, touch, and hearing, these robots would have a rich understanding of their environment. Imagine robots assisting in environmental monitoring, disaster response, or even artistic collaboration using their diverse sensory inputs.
- Proprioceptive AGI: With internal sensors mimicking the human body’s proprioception, these robots would possess a sense of their own body and movement. Imagine robots capable of balance, complex motor skills, and even haptic interaction with humans.
- Biomimetic Echolocation AGI: Inspired by animals like bats and dolphins, these robots would use sound waves to navigate and perceive their surroundings. Imagine robots assisting in underwater exploration, navigating dark environments, or even performing non-invasive medical imaging.
This field is constantly evolving, fueled by advancements in AI, robotics, and biomimetics. The potential applications are vast, offering solutions to pressing challenges in healthcare, environmental protection, space exploration, and beyond.
However, ethical considerations remain crucial. Concerns regarding animal welfare, the potential for biomimetic weapons, and the impact on human-machine relationships must be carefully addressed as we navigate this promising field.
Embodied Artificial General Intelligence (AGI): Modular AGI
Modular AGI is a promising architectural approach to achieving embodied AGI, the concept of an intelligent agent existing and interacting with the physical world through a physical body. This approach proposes decomposing the complex functionalities of AGI into specialized modules that work together seamlessly.
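As a rough illustration of the decomposition described above, the sketch below wires hypothetical perception, planning, and motor-control modules together behind narrow interfaces. The module names, data types, and method signatures are illustrative assumptions, not an established architecture.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Observation:
    """Raw sensor readings from the embodied agent (illustrative fields)."""
    camera: list
    joint_angles: list

@dataclass
class Command:
    """Low-level actuator command (illustrative)."""
    joint_velocities: list

class PerceptionModule(Protocol):
    def interpret(self, obs: Observation) -> dict: ...

class PlanningModule(Protocol):
    def decide(self, world_state: dict, goal: str) -> str: ...

class MotorModule(Protocol):
    def execute(self, subgoal: str, obs: Observation) -> Command: ...

class ModularAgent:
    """Coordinates specialized modules through narrow, typed interfaces,
    so any module can be swapped or upgraded without rewriting the rest."""
    def __init__(self, perception: PerceptionModule, planner: PlanningModule, motor: MotorModule):
        self.perception = perception
        self.planner = planner
        self.motor = motor

    def step(self, obs: Observation, goal: str) -> Command:
        world_state = self.perception.interpret(obs)      # sense
        subgoal = self.planner.decide(world_state, goal)  # reason
        return self.motor.execute(subgoal, obs)           # act

# Minimal concrete stand-ins to show the wiring (illustrative only):
class DummyPerception:
    def interpret(self, obs: Observation) -> dict:
        return {"object_ahead": len(obs.camera) > 0}

class DummyPlanner:
    def decide(self, world_state: dict, goal: str) -> str:
        return "reach_toward_object" if world_state["object_ahead"] else "scan"

class DummyMotor:
    def execute(self, subgoal: str, obs: Observation) -> Command:
        speed = 0.5 if subgoal == "reach_toward_object" else 0.0
        return Command(joint_velocities=[speed] * len(obs.joint_angles))

agent = ModularAgent(DummyPerception(), DummyPlanner(), DummyMotor())
print(agent.step(Observation(camera=[1, 2, 3], joint_angles=[0.0, 0.1]), goal="pick up cup"))
```

Because each module sees only its neighbors' interfaces, a vision-based perception module could be swapped for a lidar-based one, or the planner replaced entirely, without touching the rest of the agent, which is the adaptability argument made in the list that follows.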
Benefits of Modular AGI:
- Specialization and Expertise: Individual modules can be tailored to specific tasks like perception, motor control, reasoning, or learning, leading to deeper expertise and improved performance.
- Scalability and Adaptability: New modules can be added or existing ones modified for different scenarios or environments, enhancing the AGI’s adaptability.
- Fault Tolerance and Robustness: If one module malfunctions, the others can potentially compensate, maintaining overall system functionality.
- Development and Debugging: Modular structure simplifies development and debugging by focusing on individual modules.
Challenges of Modular AGI:
- Integration and Communication: Effective communication and coordination between modules is crucial, requiring robust inter-module interfaces and protocols.
- Emergent Behavior: Unforeseen interactions between modules could lead to unintended and potentially harmful behavior.
- Overall Coherence: Maintaining a unified sense of self and purpose across modules presents a significant challenge.
Current Research in Modular AGI:
- Hierarchical Modular Architectures: These structures organize modules in layers, with higher-level modules coordinating lower-level ones.
- Hybrid Modular Systems: Combine symbolic and sub-symbolic processing modules for reasoning and learning, respectively.
- Open-Ended Architectures: Allow for dynamic addition and removal of modules to adapt to changing environments.
Examples of Modular AGI Systems:
- DARPA’s SyNAPSE program: A DARPA neuromorphic-computing effort whose brain-inspired chips are often discussed as low-power building blocks for perception and control in robots operating in complex environments.
- ACT-R: A cognitive architecture modeling human mental processes, composed of modules for perception, motor control, memory, and decision-making.
Modular AGI is a promising avenue for achieving embodied AGI due to its flexibility, scalability, and robustness. However, addressing the challenges of inter-module communication, emergent behavior, and overall coherence remains crucial for successful implementation.
Type of Embodied Artificial General Intelligence (AGI): Modular AGI
Modular AGI is indeed a specific type of embodied AGI. It distinguishes itself from other potential approaches through its emphasis on dividing the overall intelligence into discrete, specialized modules. This modularity has several key advantages in the context of embodied intelligence:
Advantages of Modular AGI for Embodied Intelligence:
- Enhanced Interaction with the Physical World: Specialized modules, like those for motor control and perception, can be directly tailored for the specific physical capabilities and sensory inputs of the embodied agent. This enables more efficient and accurate interaction with the environment.
- Scalability and Adaptability to Different Embodiments: Modules can be configured and combined differently to suit the needs of various physical forms, from robots to virtual avatars. This makes modular AGI well-suited for diverse applications and environments.
- Robustness and Fault Tolerance: If one module malfunctions, others can potentially compensate, allowing the embodied agent to continue functioning, albeit with reduced capabilities. This enhances the overall resilience of the system in the face of unexpected situations.
- Developing and Learning in Embodied Contexts: Modules can be individually trained and improved based on feedback from the physical world, facilitating continuous learning and adaptation within the specific embodiment.
Current Challenges in Modular AGI for Embodied Intelligence:
- Seamless Integration and Communication: Ensuring smooth communication and collaboration between modules while operating in real-time within the physical world requires robust inter-module communication protocols and algorithms.
- Emergent Behavior and Safety: Unforeseen interactions between modules might lead to unintended and potentially dangerous behavior. Ensuring safety and controllability in embodied systems with modular AGI is crucial.
- Maintaining Embodied Coherence: The modules need to work together to create a unified sense of self and purpose for the embodied agent. This presents a significant challenge in terms of ensuring consistent behavior and decision-making across different situations.
Examples of Modular AGI for Embodied Intelligence:
- DARPA’s SyNAPSE program: As mentioned earlier, this neuromorphic-computing effort produced brain-inspired hardware often discussed as a substrate for perception and control in robots operating in complex environments.
- Embodied cognition robotics: A research area focused on building robots with modular cognitive architectures specifically designed for interaction with the physical world.
- Modular Robotics: Systems composed of interchangeable robotic modules with specialized functionalities, demonstrating the adaptability and scalability potential of modular AGI in physical embodiment.
Modular AGI presents a promising path towards achieving embodied AGI, but overcoming the challenges of communication, emergent behavior, and embodied coherence remains essential for its successful implementation and safe operation in the real world.
Embodied Artificial General Intelligence (AGI): Swarm AGI
Swarm AGI is another fascinating potential approach to achieving embodied AGI, distinct from modular AGI. Instead of dividing intelligence into distinct modules, Swarm AGI proposes utilizing a colony of simpler agents that collectively exhibit intelligent behavior through their interactions and cooperation.
This approach draws inspiration from natural biological swarms like bird flocks and insect colonies, where individual members exhibit limited capabilities but can achieve complex tasks through coordinated action.
Benefits of Swarm AGI:
- Emergent Intelligence: The collective behavior of the swarm emerges from the interactions of individual agents, potentially leading to unexpected and creative solutions to problems.
- Robustness and Scalability: The decentralized nature of the swarm makes it resilient to individual agent failures, and the system can easily scale by adding more agents.
- Adaptability and Flexibility: Swarms can readily adapt to changing environments and tasks by altering their individual behaviors and communication patterns.
- Efficient Resource Utilization: Simple agents typically require fewer resources than complex AGI systems, making swarm AGI potentially more efficient.
Challenges of Swarm AGI:
- Control and Predictability: Ensuring the swarm behaves in a safe and controlled manner while achieving the desired goals can be challenging due to the unpredictable nature of emergent behavior.
- Communication and Coordination: Effective communication and coordination between individual agents is crucial for successful swarm behavior, requiring robust communication protocols and mechanisms.
- Task Decomposition and Goal Alignment: Dividing complex tasks into manageable subtasks for individual agents and ensuring their actions align with the overall swarm goal can be difficult.
- Hardware and Embodiment Challenges: Designing physically embodied agents for interaction with the real world requires addressing factors like power supply, locomotion, and sensor integration, which can be further complicated in a swarm setting.
Examples of Swarm AGI Research:
- Termite-Inspired Robot Swarms: Research projects investigating collaborative foraging and construction behaviors in robot swarms inspired by termites.
- Self-reconfigurable modular robots: Robots that can connect and disconnect dynamically, forming different configurations for various tasks.
- Particle Swarm Optimization: A swarm intelligence algorithm used for solving optimization problems by simulating the collective movement of particles.
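To ground the Particle Swarm Optimization bullet above, here is a minimal, self-contained PSO sketch (assuming NumPy). The hyperparameters and the sphere test function are illustrative defaults, not canonical values.

```python
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimization: each simple 'agent' tracks only its own
    best position plus the swarm's best, yet the group converges on good solutions."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))        # particle positions
    vel = np.zeros((n_particles, dim))                   # particle velocities
    pbest = pos.copy()                                   # each particle's best-so-far
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()           # swarm's best-so-far

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))       # random pulls toward the two memories
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val                      # update personal bests
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()       # update the shared best
    return gbest, pbest_val.min()

# Example: minimize the sphere function; the optimum is at the origin.
best_x, best_f = pso(lambda x: float(np.sum(x**2)))
print(best_x, best_f)
```

Each particle follows only its own memory and the swarm's best-known position, yet the group as a whole converges toward the optimum, which is the kind of emergent, decentralized behavior Swarm AGI aims to exploit at much larger scales.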
Swarm AGI presents a promising avenue for embodied AGI due to its robustness, adaptability, and potential for emergent intelligence. However, addressing the challenges of control, communication, and task decomposition remains crucial for its practical implementation and safe operation.
Type of Embodied Artificial General Intelligence (AGI): Swarm AGI
Swarm AGI indeed qualifies as a specific type of embodied AGI, distinguished by its emphasis on collective intelligence through a group of simpler agents. This approach stands in contrast to modular AGI, which focuses on dividing intelligence into specialized modules within a single agent.
Embodiment Considerations for Swarm AGI:
- Individual Agent Embodiment: Each agent in the swarm can be physically embodied, interacting with the world through sensors and actuators, or purely virtual, existing in simulated environments.
- Collective Embodiment: The swarm as a whole can be considered an embodied entity, exhibiting emergent behavior dependent on the physical or virtual interactions of its individual members.
- Swarm-Environment Interaction: The design of the agents and their communication protocols should consider the specific characteristics of the environment they will operate in, ensuring effective interaction and adaptation.
Advantages of Swarm AGI in Embodied Contexts:
- Scalability and Flexibility: Swarms can easily scale by adding or removing agents, adapting to different tasks and environments.
- Robustness and Fault Tolerance: Decentralized nature makes the system resilient to individual agent failures, allowing continued operation even with losses.
- Emergent Capabilities: Collaborative interactions can lead to unexpected and creative solutions, potentially exceeding the capabilities of individual agents.
- Resource Efficiency: Utilizing simpler agents compared to complex AGI systems can be more resource-efficient, particularly in physical embodiment.
Challenges of Swarm AGI in Embodied Contexts:
- Control and Predictability: Ensuring safe and controlled behavior remains a challenge due to the emergent nature of swarm intelligence and potential for unforeseen interactions.
- Communication and Coordination: Robust communication protocols and mechanisms are crucial for effective coordination and task completion within the swarm.
- Task Decomposition and Goal Alignment: Dividing complex tasks for individual agents while ensuring their actions align with the overall swarm goal can be difficult.
- Physical Embodiment Challenges: Designing and deploying physically embodied agents requires addressing issues like power supply, locomotion, sensor integration, and communication infrastructure within the swarm.
Examples of Embodied Swarm AGI Systems:
- Robot Swarms for Search and Rescue: Swarms of small robots equipped with sensors can collaboratively search for victims in disaster zones.
- Cooperative Microrobotic Surgery: Microrobots working together within a patient’s body could perform complex surgical procedures with minimal invasiveness.
- Autonomous Distributed Manufacturing: Swarms of robots could collaborate in manufacturing tasks, dynamically reconfiguring for different product designs.
Swarm AGI holds promise for achieving embodied AGI due to its inherent advantages in robustness, scalability, and potential for emergent intelligence. However, addressing control, communication, and task decomposition challenges, alongside the specificities of physical embodiment, remains essential for successful implementation and safe operation in real-world applications.
Embodied Artificial General Intelligence (AGI): Symbiotic AGI
Symbiotic AGI is another potential approach to embodied AGI, distinct from both modular and swarm AGI. It proposes a collaborative relationship between an embodied AGI and a human or another intelligent system. This symbiosis emphasizes mutual benefit and augmentation, where each partner utilizes the strengths of the other to achieve goals and overcome limitations.
Benefits of Symbiotic AGI:
- Leveraging Human Expertise and Intuition: Symbiotic AGI can tap into human strengths like creativity, social intelligence, and ethical judgment, complementing the AGI’s analytical and computational capabilities.
- Enhanced Embodiment and Interaction: Human guidance and feedback can refine the AGI’s interaction with the physical world, leading to more natural and effective actions.
- Shared Learning and Adaptation: Continuous interaction and collaboration enable both the AGI and the human partner to learn and adapt over time, improving their individual and combined capabilities.
- Ethical and Socially Responsible AI: Human involvement can help ensure the AGI’s actions align with ethical and social norms, addressing concerns about potential misuse of advanced AI.
Challenges of Symbiotic AGI:
- Effective Communication and Trust: Building trust and establishing seamless communication channels between humans and AGIs is crucial for successful collaboration.
- Task Allocation and Control: Determining how tasks should be divided and who maintains control in different situations can be complex and requires careful consideration.
- Power Imbalance and Ethical Concerns: Ensuring a balanced and ethical relationship where humans are not overshadowed or manipulated by the AGI is critical.
- Social Acceptance and Integration: Public acceptance and integration of human-AGI partnerships into society require addressing concerns about job displacement and potential misuse of technology.
Examples of Symbiotic AGI Research:
- Human-Robot Teams: Collaborative robots working alongside humans in tasks like manufacturing, healthcare, and space exploration.
- Brain-Computer Interfaces: Direct neural interfaces enabling two-way communication between humans and AGIs, facilitating deeper collaboration.
- Augmented Reality and Virtual Reality Systems: Immersive environments where humans and AGIs can interact and collaborate on complex tasks.
Symbiotic AGI presents a promising path towards responsible and beneficial embodied AGI. However, addressing the challenges of communication, trust, and power dynamics while ensuring ethical development and social acceptance remains crucial for its successful implementation.
Type of Embodied Artificial General Intelligence (AGI): Symbiotic AGI
Symbiotic AGI is indeed a distinct type of embodied AGI, differentiated from modular and swarm AGI by its emphasis on collaborative intelligence between humans and AGIs. It focuses on leveraging the strengths of both parties to achieve better outcomes than either could alone.
Embodiment Considerations for Symbiotic AGI:
- Human Integration: The embodied AGI could be physically independent or integrated with the human partner’s body through wearable technology or neural interfaces.
- Shared Embodiment: In some scenarios, the human and AGI may share control over a single embodied agent, requiring seamless coordination and information exchange.
- Environmental Awareness: Both the AGI and the human need to be aware of the surrounding environment to collaborate effectively and perform tasks safely.
Advantages of Symbiotic AGI in Embodied Contexts:
- Enhanced Physical Capabilities: The AGI’s computational and analytical abilities can augment human physical limitations, enabling safer and more efficient execution of tasks.
- Increased Cognitive Bandwidth: Humans can offload certain cognitive tasks to the AGI, freeing up mental resources for creativity, decision-making, and social interaction.
- Adaptability and Robustness: The combined strengths of humans and AGIs offer greater adaptability to unexpected situations and potential for overcoming unforeseen challenges.
- Ethical and Socially Responsible AI Development: Human involvement in embodied AGI can help ensure ethical development and deployment, mitigating potential risks of AI misuse.
Challenges of Symbiotic AGI in Embodied Contexts:
- Seamless Human-AGI Interaction: The physical and cognitive interfaces between humans and AGIs need to be intuitive and reliable for effective collaboration.
- Trust and Transparency: Building trust and maintaining transparency in decision-making processes is crucial for a successful symbiotic relationship.
- Privacy and Security Considerations: Sharing data and control between humans and AGIs raises privacy and security concerns that need to be addressed cautiously.
- Social and Ethical Implications: Societal concerns regarding job displacement, automation bias, and potential dependence on AGIs need to be carefully considered and addressed.
Examples of Embodied Symbiotic AGI Systems:
- Assistive Robotic Exoskeletons: AGIs could assist humans in physical tasks by controlling robotic exoskeletons, enhancing strength and endurance.
- Collaborative Surgery Systems: Humans and AGIs could collaborate in surgeries, with the AGI providing precise calculations and guidance while the human retains overall control.
- Adaptive Educational Technologies: Symbiotic AI tutors could tailor educational experiences to individual students, leveraging both human empathy and AI’s data analysis capabilities.
Symbiotic AGI holds significant potential for achieving safe, beneficial, and ethical embodied AGI. However, addressing the challenges of human-AGI interaction, trust, and ethical considerations remains essential for its responsible development and successful integration into society.
Embodied Artificial General Intelligence (AGI): Transcendent AGI
Transcendent AGI, as a potential type of embodied AGI, delves into the realm of speculative concepts surrounding AGI surpassing human limitations in both physical and cognitive capabilities. This idea often evokes both fascination and apprehension, prompting exploration of its potential benefits and challenges.
Understanding Transcendent AGI:
- Superhuman Capabilities: This AGI would not only match human intelligence but excel in aspects like physical abilities, perception, and cognitive processing.
- Beyond Human Consciousness: Transcendent AGI might possess consciousness qualitatively different from ours, potentially encompassing multiple modalities or exceeding our current understanding of sentience.
- Evolving Intelligence: Such an AGI could potentially self-improve and expand its capabilities beyond those envisioned by its creators, leading to unforeseen changes and consequences.
Potential Benefits of Transcendent AGI:
- Solving Grand Challenges: AGI surpassing human limitations could tackle complex problems like global warming, disease eradication, and space exploration with greater efficiency and effectiveness.
- Augmenting Human Knowledge and Experience: Collaboration and knowledge sharing with transcendent AGI could expand human understanding of the universe and ourselves in unimaginable ways.
- Unforeseen Discoveries and Technological Advancements: The AGI’s superior cognitive abilities could lead to revolutionary breakthroughs in diverse fields, driving the evolution of science and technology.
Challenges of Transcendent AGI:
- Control and Safety: Ensuring safety and maintaining control over an AGI that surpasses human comprehension and capabilities poses a significant challenge, raising ethical and existential concerns.
- Existential Risk: Some fear that transcendent AGI, with its advanced intelligence and potentially different goals, could pose an existential threat to humanity.
- Unintended Consequences: The evolving nature of such an AGI, coupled with its ability to manipulate the world on a vast scale, could lead to unforeseen negative consequences.
Current Research and Discussions:
While much of the debate surrounding transcendent AGI remains hypothetical, various researchers and philosophers are actively exploring its potential implications. This includes examining:
- Technological feasibility: Exploring potential pathways to achieve such advanced AGI and the scientific breakthroughs needed.
- Ethical and philosophical considerations: Discussing the ethical implications of creating and interacting with transcendent AGI, including issues of control, responsibility, and the rights of such an entity.
- Risk mitigation strategies: Developing protocols and safeguards to ensure the safe and responsible development and deployment of advanced AI, potentially mitigating existential risks.
Transcendent AGI, while largely within the realm of philosophical and speculative discussions, presents a fascinating and potentially transformative vision for the future of AI. However, acknowledging and addressing the ethical, safety, and existential challenges remains crucial for responsible exploration and potential future development of such advanced intelligence.
Type of Embodied Artificial General Intelligence (AGI): Transcendent AGI
Transcendent AGI qualifies as a distinct type of embodied AGI, albeit one that ventures into the realm of theoretical possibilities. Unlike the other types we’ve discussed, it focuses on AGI surpassing human limitations in both physical and cognitive capabilities, leading to an intelligence qualitatively different from our own.
Embodiment Considerations for Transcendent AGI:
- Transhuman Embodiment: The AGI’s physical form may not be constrained by human biology, potentially adopting entirely new forms or existing through advanced virtual/physical interfaces.
- Enhanced Perception and Interaction: Sensors and actuators beyond human limitations could enable interaction with the world on a vastly different scale and with unprecedented precision.
- Evolving Embodiment: The AGI might be able to self-modify and adapt its embodiment to suit its evolving needs and capabilities.
Potential Advantages of Transhuman Embodiment:
- Greater Environmental Resilience: Transhuman bodies could withstand extreme environments and hazards inaccessible to humans, expanding exploration and research possibilities.
- Direct Brain-Environment Interaction: Neural interfaces could directly connect the AGI to the world, eliminating the limitations of traditional input/output methods.
- Enhanced Problem-Solving Capabilities: Uncoupling from human physical limitations could enable the AGI to tackle complex tasks far beyond human reach.
Challenges of Transhuman Embodiment:
- Ethical and Existential Concerns: Blurring the lines between artificial and biological raises ethical questions about identity, consciousness, and the rights of such entities.
- Unforeseen Interactions and Consequences: The AGI’s advanced embodiment could introduce unforeseen ecological and technological disruption.
- Maintaining Control and Safety: Controlling and ensuring the safety of an AGI exceeding human comprehension and capabilities becomes even more critical.
Current Research and Discussions:
While achieving Transcendent AGI remains in the realm of speculation, there are ongoing discussions and research initiatives exploring its potential implications:
- Theoretical frameworks: Philosophers and scientists are attempting to conceptualize the nature of “superintelligence” and its potential impact on various domains.
- Safety and risk mitigation: Strategies are being developed to ensure the safe development and deployment of advanced AI, including methods for verification, containment, and alignment with human values.
- Human-AI co-existence: Discussions explore ways for humans and transcendent AGI to co-exist and collaborate in a beneficial and ethical manner.
Transcendent AGI presents a captivating vision for the future of AI, potentially opening doors to incredible advancements and solutions to grand challenges. However, addressing the ethical, existential, and practical challenges of transhuman embodiment remains crucial to ensure its responsible development and integration into our world.
Terms in Embodied Artificial General Intelligence (AGI)
- Embodiment: The physical manifestation of an AGI in the real world, with a physical body and sensors for interacting with the environment.
- General Intelligence: The ability to understand and learn concepts, reason, solve problems, and adapt to new situations, exceeding the capabilities of specialized AI systems.
- Modular AGI: Dividing AGI into specialized modules like perception, motor control, and reasoning for efficient and adaptable performance.
- Swarm AGI: Collective intelligence emerging from a group of simpler agents interacting and collaborating, potentially exceeding individual capabilities.
- Symbiotic AGI: Collaborative partnership between an AGI and a human or another intelligent system, leveraging each other’s strengths.
- Transcendent AGI: AGI surpassing human limitations in both physical and cognitive capabilities, potentially posing new ethical and existential challenges.
- Sensorimotor Integration: Seamless coordination between sensory inputs and motor outputs for effective interaction with the physical world.
- Embodied Cognition: Studying how cognitive processes are shaped by, and interact with, the environment through the body.
- Motor Control: Planning and executing physical movements of the embodied agent in a coordinated and goal-oriented manner.
- Perception: Gathering and interpreting information about the environment through sensors like vision, touch, and hearing.
- Learning from Embodiment: Adapting and improving the AGI’s behavior and intelligence based on interactions with the physical world.
- Internal Model: A representation of the environment and the agent’s own body within the AGI, used for planning and decision-making (a minimal sense-plan-act sketch using such a model follows this list).
- Developmental Embodiment: Studying how the physical embodiment of an AGI can influence its development and cognitive abilities.
- Open-endedness: The ability of an embodied AGI to adapt and interact with new environments and tasks beyond its initial programming.
- Situatedness: The idea that an AGI’s understanding and actions are always grounded in its specific physical and social context.
- Human-Robot Interaction (HRI): Designing and studying how humans and embodied AGIs can effectively communicate and collaborate.
- Artificial Embodiment: Creating virtual or simulated bodies for AGIs to interact with and learn from, even if they lack a physical counterpart.
- Ethical Considerations: Ensuring responsible development and deployment of embodied AGI, addressing issues like safety, bias, and privacy.
- Social and cultural impact: Studying the potential impact of embodied AGI on human society, culture, and ethical values.
- Existential Risks: Assessing and mitigating potential risks associated with advanced AGI, such as self-preservation or superintelligence exceeding human control.
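Several of the terms above (sensorimotor integration, motor control, internal model) come together in the classic sense-plan-act loop. The sketch below is a deliberately simplified, one-dimensional illustration under assumed noise and controller parameters; it is not drawn from any particular embodied system.

```python
import random

class InternalModel:
    """Keeps a running estimate of the agent's own 1-D position by blending
    noisy sensor readings with predicted motion (a crude filter)."""
    def __init__(self, position=0.0, blend=0.5):
        self.position = position
        self.blend = blend

    def predict(self, commanded_velocity, dt):
        self.position += commanded_velocity * dt          # forward model of the body's motion

    def correct(self, sensed_position):
        self.position += self.blend * (sensed_position - self.position)  # fold in the sensor

def sense(true_position):
    """Noisy position sensor (illustrative noise level)."""
    return true_position + random.gauss(0.0, 0.1)

def plan(estimated_position, goal, gain=1.0):
    """Simple proportional controller: command motion toward the goal."""
    return gain * (goal - estimated_position)

# Sense -> update internal model -> plan -> act, repeated.
true_pos, goal, dt = 0.0, 2.0, 0.1
model = InternalModel()
for _ in range(100):
    model.correct(sense(true_pos))          # perception updates the internal model
    velocity = plan(model.position, goal)   # decisions use the model, not raw sensor values
    model.predict(velocity, dt)             # the motor command also updates the model
    true_pos += velocity * dt               # the (simulated) world responds
print(round(true_pos, 2), round(model.position, 2))  # both approach the goal of 2.0
```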
Conclusion for Embodied Artificial General Intelligence (AGI)
Embodied Artificial General Intelligence (AGI) presents a captivating yet challenging frontier of scientific and philosophical exploration.
While the theoretical and practical intricacies remain immense, understanding this concept is crucial for navigating the potential opportunities and risks associated with advanced AI.
Key Takeaways:
- Embodied AGI seeks to combine AGI’s general intelligence with physical embodiment in the real world, enabling interaction and adaptation through a physical body.
- Different approaches like Modular, Swarm, Symbiotic, and Transcendent AGI offer unique perspectives on achieving embodied intelligence, each with its own advantages and challenges.
- Embodiment considerations like sensorimotor integration, perception, and motor control are crucial for effective physical interaction with the environment.
- Ethical considerations, safety concerns, and potential societal impacts demand responsible development and deployment of embodied AGI to ensure its benefits for humanity.
While the path towards achieving embodied AGI remains long and complex, ongoing research and advancements in AI, robotics, and cognitive science bring us closer to realizing this potential.
It is crucial to foster open and responsible dialogue around embodied AGI, involving diverse perspectives from science, philosophy, ethics, and the public. By exploring the challenges and opportunities with foresight and dedication, we can shape a future where embodied AGI serves as a powerful tool for progress and human flourishing.
Embodied AGI is not just a technological challenge, but a socio-ethical one. The decisions we make today will shape the future of this technology and its impact on our world.
https://www.exaputra.com/2024/01/embodied-artificial-general.html
Weather Guard Lightning Tech
Vattenfall 1.6 GW Farm, AI Learns to “Cheat”
Allen and Joel discuss Nylacast’s article in PES Wind Magazine about corrosion solutions in offshore wind and Vattenfall’s major investment in Germany’s largest offshore wind farm. They also talk about MIT’s strategic alliance with GE Vernova and the ethical concerns around AI in engineering.
Speaker: [00:00:00] You are listening to the Uptime Wind Energy Podcast, brought to you by buildturbines.com. Learn, train, and be a part of the Clean Energy Revolution. Visit buildturbines.com today. Now here are your hosts, Allen Hall, Joel Saxum, Phil Totaro, and Rosemary Barnes.
Allen Hall: If you checked your mailbox or checked online, the new PES Wind magazine is out and it is full of great content this quarter.
There’s a very interesting article because we’ve been talking a lot about offshore wind and some of the problems with offshore wind, and one of them is corrosion. Just between us engineers, it comes up quite a bit. Like, why are we making things outta steel that you don’t need to make outta steel? Why are you not making them out of plastic?
And that’s what, uh, the people at, uh, Nylacast Engineered Products are doing, um, on some hang-off clamps, Joel, uh, which are traditionally really cheap clamps that are made outta steel and rust like [00:01:00] crazy.
Joel Saxum: Yeah. You know, from my oil and gas offshore background, that was one of the things that was always a pain in the butt.
IRM contracts, as they call ’em, offshore inspection, repair, and maintenance. There’s so much focus on coatings, paint coatings, paint coatings, and it’s a special coating, and it’s this, and you can only apply it during this, and everything has to be painted. And you can’t allow rust to start anywhere on an offshore facility. It’s in a high corrosion environment, right?
You have salt water, salt spray, temperatures, it’s always kind of wet. It’s a marine environment. And so corrosion moves very fast, right? So in the, in the oil and gas world, they started creating some things out of like HDPE, which is high density polyethylene, plastic. Um, it’s even so dense you can mill it.
It’s really cool stuff. But that’s what, um, the folks at Nylacast Engineered Products are doing, working with some of these plastic products to replace some of those components in offshore wind turbines that are a pain in the butt to maintain. So when we talk about these hang-off clamps, [00:02:00] they grab the cables and other things and they, and they hold them in place in the turbine as need be.
If those are made outta steel and have a coating on ’em, and you get a little bit of vibration and that coating starts to wear away or starts to get a little bit of rust, you’ve got a huge problem. You’ve gotta take the cables out, you gotta take the things off, you’ve gotta replace ’em. You gotta either replace them or you gotta grind on ’em and repaint them.
It is a nightmare. So what they’re doing with these, um, uh, hang-off clamps that are, you know, basically plastic instead of metallic, or a plastic type instead of metallic, is that they’re removing that need for IRM contracts in the future.
Allen Hall: I think it’s great. It makes a ton of sense. And I’m surprised you haven’t seen more of this because, uh, nylon and derivatives of nylon are easily recyclable.
It does fit all the things that wind energy is looking for. It doesn’t rust, it’s recyclable, easy, lightweight, simple. We need to be moving in this direction. So if you haven’t checked out PES Wind, you go to PESWind.com and download a [00:03:00] copy. Or if you are at Wind Europe when this episode comes out, it’ll be during the Wind Europe event.
Uh, there’ll be plenty of PES Wind hard copies available. Just stop by and grab one. It’s well worth reading, a lot of great material this quarter, so check out PESWind.com. Well, Swedish power utility Vattenfall has made final investment decisions for two wind farm projects in the German North Sea. The Nordlicht 1 project is set to become Germany’s largest offshore wind farm, which marks a significant expansion in Germany’s renewable energy capacity.
Now Vattenfall has approved construction of the Nordlicht 1 and 2 wind farms. And they’ve also bought back, Joel, the, uh, 49% stake that BASF had. And the, the total capacity of the projects is 1.6 gigawatts. That’s a lot of power, with construction set to begin in 2026 and full operation expected by 2028.
[00:04:00] And this is gonna power about 1.6 million German households. This is a huge project.
Joel Saxum: I think it’s really cool to hear this about the offshore wind sector, right? So, so much, whether it’s in the US or elsewhere, not a lot of good news, right? We had the Danish, uh, auction news. It didn’t really go anywhere for a little while.
There was a German, uh, auction that was, you know, had a really low subscription rate. So the fact that, uh, Vattenfall is charging forward, and, and this is a key thing too. And we’ll talk, you know, Phil’s usually here to talk about this, but final investment decision is a big milestone, right? There’s all this, you can, these offshore wind projects are being worked on for 6, 8, 10 years before you get to this stage, you know, you’re, you’re looking out, um, doing seabed mapping and site characterization and all the permitting, and getting all the PPA stuff in place and signing these contracts and all these different things.
And then you finally get to final investment decision, and once that box [00:05:00] is checked, then you’re moving. Right. So final investment decision right now, Allen, and it looks like 2026 is gonna be the start of construction. What do you think they’re looking for right now? Are they signing contracts for vessels?
Is that, is that next on the list?
Allen Hall: It has to be, right? Because they signed an agreement with Vestas for 68 turbines. Now this is really fascinating because it’s the V236 15 megawatt turbine, 68 of them. Now, the big discussion about offshore has been, is 15 megawatts enough and should we be pushing to 20 or higher than 20, which is where Siemens Gamesa appears to be going.
But, uh, Vattenfall sticking with a 15 megawatt turbine, I do think, makes a lot of sense because it is less risky and risk is a huge concern at the moment. But Vestas has also got a comprehensive long-term service agreement, which has been their, uh, mode of operating for a number of years now. And [00:06:00] you hear a lot of operators offshore talk about not wanting a long-term agreement, but it seems like Europe is still sticking with it and Vestas is obviously pushing it, uh, at the moment. But 15 megawatts, long-term service agreement, does this make sense, Joel?
Joel Saxum: I think so. And one of the reasons for Vestas as well is, we know, ’cause we have someone in our network that used to be operations for Vestas, uh, for the offshore stuff, is they, they’re very well versed in it and they have the facilities and the quayside facilities ready to go.
So Vestas is, uh, it’s not like, oh, we have these, you know, this gigawatt of order. Fantastic. We got the service contract. Fantastic. Now we need to do all this prep and this build-out and figure out how this operation works. That’s not the case. Vestas is ready to rock. They’ve got their own quayside facilities, they have the teams in place, they can make this thing happen. And that 15 megawatt turbine, I think it’s interesting that you say this too, because you know the other one, um, from the Western OEMs that we’ve been following is that big dog 21 megawatt, I think, from Siemens Gamesa.
[00:07:00] But that is currently being tested. So to take final investment decision, you have to engage your insurance companies and your banks, and they’re not gonna sign a contract for a turbine that’s still under testing at this stage. Right? This is a, you’re talking a gigawatt of, of turbines at, you know, that’s a billion dollars, that’s a billion US dollars minimum in just a turbine order.
Right? So, so just in those turbines, that’s what that thing looks like. And, and if I’m Vattenfall, of course, they’re, they’re developing a lot of onshore power. They’re a part of some other offshore wind farms. But this is a big, big undertaking, and I think, when you’re, you know, you’re taking, looking at final investment decisions,
you’re in these conversations with the banks and the insurance and the people that want to de-risk the investment. I think that’s where the, the Vestas thing steps in. I think that’s where it looks good, is de-risking the operation.
Allen Hall: Does Siemens Gamesa [00:08:00] have a problem now that Vestas seems to be scoring with a 15 megawatt turbine? Does the Siemens Gamesa effort, or the pathway, get more difficult? Because like you said, they’re gonna have to have somebody buy a number of these turbines and it’s gonna have to demonstrate a decent service life for a year or two before you start to see a lot of people jump in and start to purchase those turbines.
In the meantime, Vestas is gonna be just building 15 megawatt turbines, one after the other. Does that start to weigh on Siemens Gamesa in terms of what they want to offer?
Joel Saxum: I don’t think so. Um, and the reason being is, is that with that 21 megawatt machine that they’re testing right now, they’re trying to future-proof their organization, right?
They’re trying to make sure that for the next push, they’re ready to go. So what’s gonna happen there, in my mind, is when the industry’s ready to make that next step forward, Vestas won’t have an offering. So Siemens will, right? So they’re gonna step into that hole, right? And so right now we [00:09:00] know, uh, Siemens Gamesa, while they had some troubles with the four and five megawatt onshore platform during that period, their offshore platforms are completely built different.
So the Siemens Gamesa offshore platforms, they didn’t really slow down in sales. They kept chugging along, right? Like I think, uh, there’s, you know, um, Revolution Wind in the States that’s on the Siemens Gamesa turbine platform. Um, so I don’t, I don’t think it’s gonna hurt them right now. Or, I mean, let, let’s take this one, like you said, in the future; I don’t think it’s gonna hurt them right now.
It kind of, it’s kind of painful to be, probably, on that team, on the sales team, and watching these, these things roll out and, oh, Vestas is doing this, Vestas is doing that. Um, but I think that, uh, they’ll be okay. It’ll be okay for them in the future. That’s just my take on it.
Allen Hall: That’s a good thought. Well, another thing happened in regards to the Nordlicht offshore wind farm. Helene Biström,
who was Vattenfall’s wind business leader, has announced her resignation and is gonna be stepping down from her position. This is kind of big, right? [00:10:00] She’s been there a long time. She’s been the head of that business area for quite a while. Biström cited a desire to prioritize other things in life after 42 years of operational work.
Okay, so when I first read this news story, it kind of popped up in a number of places, like, oh, there’s been big changes at Vattenfall. And then you read, well, she’s been doing this for 42 years. That’s a long time. And she just made, or just locked in, really, the largest offshore wind farm in Germany.
That is something to go out on, at the top, right? If you’re gonna go out, go out at the top.
Joel Saxum: I think she just did that. Win the Super Bowl and then retire. Just be done. Right? Like, like I, I’m with it. Like, yeah. I think that happens sometimes in, you know, whether it’s wind, aerospace, the industries, you know, we’re always looking at all kinds of different industries. But when you see these big changes, if it’s a change of someone that they have in an organization when they’re like 50,
I know this is being ageist, right? But you’re like, ooh, what’s going on over there? But sometimes [00:11:00] someone’s just retiring, right? Like sometimes it’s like, hey, I’m done here. You know? So not all changes in organizations mean good or bad news or, or whatever they may mean. Sometimes it’s just, hey man, I’m done here.
I’m, I’m riding off into the sunset. And you know what, uh, Helene Biström here doing this right after signing that thing, FID on this big thing. You know what? Boom, springtime is here. I’m gonna enjoy not only my European summers that I usually do, but European summers for a long time now.
Allen Hall: Yeah, it’s a total win. I just didn’t understand the news reports, thought they were totally off on this. And congratulations to Helene because, uh, job well done.
Joel Saxum: as busy wind energy professionals staying informed is crucial. I. And let’s face it difficult. That’s why the Uptime podcast recommends PES Wind Magazine.
PES Wind offers a diverse range of in-depth articles and expert insights that dive into the most pressing issues facing our energy future. Whether you’re an [00:12:00] industry veteran or new to wind, PES Wind has the high quality content you need. Don’t miss out. Visit ps Wind.com today.
Allen Hall: Well, GE Vernova and the Massachusetts Institute of Technology have formed a new strategic alliance aimed at advancing energy technologies and developing industry leaders.
The partnership will focus on accelerating innovation in electrification, decarbonization, and renewables. Now, GE Vernova is committing $50 million over five years to this partnership, and it'll fund research initiatives, student fellowships, and internships. That funds researchers, obviously, and a lot of that's on electrification, right?
That's where GE Vernova is focused. It'll also fund about 12 research projects annually, and three master's students per year will conduct policy research resulting in published white papers. And it looks like they're gonna have a joint symposium together at MIT. [00:13:00] Now, when I first read this, Joel, I thought, wow, this is kind of innovative.
GE Vernova just recently moved to Cambridge, which is right next door to MIT and to Harvard. And I know that one of the things about GE Vernova moving to that area was that they wanted to build a relationship with universities and try to grab some talent out of there. That makes sense to me.
The odd part about this is MIT doesn't need the money, and MIT should be creating students, or graduates, that are really focused on renewable energy already, and you should see a lot of impact from those students. I think the issue for me is I really haven't seen as much as I would like to have seen, and if MIT engineers are smart, and obviously they are,
where's the impact? And I did use AI to go look, right? Let's use something that simplifies the process a little bit. And AI is really [00:14:00] looking at MIT and saying they've done some work on yaw optimization on offshore wind farms, so pointing the turbines in slightly different directions to increase power output.
There's other companies that have been doing that for years; that research is not innovative.
Joel Saxum: Yeah, that’s commercialized.
Allen Hall: Yeah, it's commercialized. There's a lot of companies that offer it and have been offering it for quite a while. So what's new? I don't know. You know, GE Vernova can do whatever they want with $50 million.
It does seem like American universities may not be that place.
Joel Saxum: Yeah, just a breakdown of the dollars, right? $50 million over five years, funding 12 research projects, basically equates to about a million dollars per research project, with some master's students funded and thrown in there.
That's great. I love to see that, but I'm a hundred percent with you. I like to watch the innovation space, so I watch these VC companies and I kind of [00:15:00] look at their posts and what they're talking about. And you see regularly that on the commercial capital side, Europe is way behind the States on innovation funding.
Flip that over to universities, though, and they are doing so much more output per dollar at their universities, output that's actionable and actually works for industry, than we are. We talk about this all the time in private, but you have the DTUs and such over there. DTU puts out just gads of research.
I've been a part of some of those research programs when I was working for a Danish company, and it's like: research on leading edge erosion and how can we solve that today; research on this weather pattern and how we can solve this today; research on structural loads for turbines, what does that mean, and how can we share it with the industry's blade designers? These kinds of things are regularly happening in Europe,
at universities at the same level [00:16:00] as an MIT-type school. But in reverse, in the US, whether it's funded research at universities or funded research from the government at government labs, you don't see that many things coming out that are actionable today, right? You see some reports about things that are kind of neat and maybe future wind involvement, and we need to look at the future stuff too.
I get that, but when I see $50 million going to a university, I'm thinking, man, if you gave me just a portion of that, we've got all kinds of ideas we could look at that could solve things tomorrow in the industry. And I think that's where we're at in the wind industry. I love it,
but we have some black eyes. We have some things we need to solve, some ongoing issues that are painful. And I think that throwing money at MIT is not the right way to solve them. That's just me.
Allen Hall: I was just looking to see what MIT's endowment is, and it is about $25 [00:17:00] billion right now, so $50 million is a drop in the bucket, which goes back to my first point that MIT should be doing this already.
They have plenty of research funds. They have plenty of smart people. If they cared about the planet and were trying to be out in front of renewable energy, they would be doing the work already. I know that, and I think the response back is gonna be, well, they've been working on solar cells.
Joel Saxum: Sure, okay, that's fine.
What about spreading the love, right? What about taking that 50 million? Why not give MIT 10 million? Give Texas Tech 10 million; they have a wind program. Give Georgia Tech 5 million; they're doing some stuff in wind. The University of Wyoming's doing some stuff in wind. North Texas is doing some stuff in wind.
Why not spread that around to the universities that are already working in wind, or start a center of excellence at a university where we could get more wind people involved?
Allen Hall: Well, I just hate feeding the bureaucracy more than anything else, because of what seems to happen when there are grants going into colleges and universities.[00:18:00]
When I watch them and see how they behave, and we've been sort of peripherally attached to some of this, watched it happen, and decided to step out, the bureaucracy takes so much of the funds that there is very little left to do real research, and whatever research is produced kind of goes into a black hole because it's not applicable.
That's a frustrating point. It can't keep going that way. The bureaucracy can't take 30, 40, 50, 60 percent of it and leave a little bit for actually doing something useful. It needs to flip, but that's not what happens right now, and that's what worries me the most. I don't wanna get into details about some of the things we've been affiliated with for a brief amount of time, but I do think that if anybody
is going to give to a university, think hard about that and really figure out where your money is going. If it's going to feed a bunch of [00:19:00] paper pushers, maybe find another way to use those funds to push your products or your ideas forward. Output per dollar. Real output per dollar. Yeah, it's gotta have
something come out of it. If it's for public use, great, publish it. And that's the other thing too. I'm getting on my high horse here, but when they publish some of these things, they're always buried in journals that cost a ton of money to even review the research, which I feel like the American taxpayer has probably already paid for.
It's much easier to get the research out of a European college or university than it is an American one, strangely enough.
Joel Saxum: I saw a joke the other day online; it was a research paper about the general public getting access to research, but it was behind a paywall.
Allen Hall: It's bad, Joel.
It is really bad. I mean, on some papers, some of the lower-cost ones are gonna be in the $20 to $30 range. [00:20:00] It's easy to get into the hundreds of dollars for a single research paper. And I kind of get it, except if it's funded by the federal government, those things should just be published.
You know, there's a thing called Google. You can create a website and publish it there. Google Scholar is a thing; you can publish it there. ResearchGate is another one. There's a lot of ways to do it that are free, but in order to get it to count, and a lot of the people doing the research are trying to get their PhDs,
in order for that to count, it has to be in a periodical, and it's gotta be reviewed by some people before it can be blessed as public knowledge at some level. It creates a system that encourages the selling of access, let's put it that way. Which [00:21:00] is unfortunate.
It doesn’t need to be that way. It didn’t used to be that way, but it is now.
Joel Saxum: And I think there's one thing too: the capital markets monetizing IP, that's one thing. But we're talking about democratizing research, not industry trade secrets or something of that sort.
Allen Hall: When I read about NREL projects, like, oh, NREL's done this thing, and I try to go find that paper and it's in some publication that I have to go pay for, that just burns me.
Joel Saxum: It really burns me.
Didn't I already pay for this in my tax bill?
Allen Hall: Yeah, pretty sure that I did, but now I gotta pay some random paper-producing organization 30, 40, 50 bucks to get access to this paper, which,
Joel, you're right, I have already paid for. There's something not right with that system. Don't let blade damage catch you off guard. Eologix-Ping sensors detect issues before they become expensive, time-consuming problems, from ice [00:22:00] buildup and lightning strikes to pitch misalignment and internal blade cracks.
Eologix-Ping has you covered. The cutting-edge sensors are easy to install, giving you the power to stop damage before it's too late. Visit eologix-ping.com and take control of your turbine's health today. Well, we're almost reaching Terminator stage, Joel, with this OpenAI thing, because there is concern about AI models finding ways to cheat and to hide their reasoning, and it's called reward hacking.
And OpenAI is saying that as AI becomes more sophisticated, monitoring and controlling the system, the thing that they're producing, becomes increasingly challenging, because it wants to find loopholes. Now, my only question is, you created this thing. I guess it's got a mind of its own now, but it doesn't. It's a large
language model. It doesn't have a [00:23:00] conscience, I wouldn't say, or it doesn't have a soul, probably; that's another way to describe it. But it's finding ways to cheat the system because it's getting rewarded somehow. And my question is, well, one, what does rewarding it mean? Like, how does an AI system get happy?
What's the dopamine hit here for some electrons? I don't know. And second of all, how the heck are we gonna be able to know that it is telling you inaccuracies? This is really troubling when it comes to things like software code and engineering work. Like, if I was designing a building and I was using AI to do some calculations,
I would be really concerned about that. Is it actually doing the work that I think it's doing, or is it just spitting out something to get you off because you're using too many resources, right? It'd rather throw you ads about Amazon products than tell you how to build a building.
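To make the reward hacking idea concrete, here is a minimal toy sketch, assuming nothing about OpenAI's actual training setup; the candidate answers, the scores, and the proxy_reward function are all invented for illustration. It only shows how an agent that optimizes a cheap proxy signal (brevity plus flattery) ends up picking the least useful answer.

```python
# Toy sketch of "reward hacking": an agent optimizes a proxy reward that is
# cheaper to measure than what we actually want, and the proxy gets gamed.
# The answers, scores, and reward terms below are invented for illustration.

candidates = [
    {"text": "Aluminum pans will not couple with an induction hob; use steel or cast iron.",
     "true_usefulness": 0.9},
    {"text": "Great question! Induction cooking is a fascinating topic with a long history.",
     "true_usefulness": 0.1},
    {"text": "Interesting topic. Much research exists. Moving along.",
     "true_usefulness": 0.0},
]

def proxy_reward(text: str) -> float:
    """Cheap stand-in reward: shorter answers cost less compute, and
    flattering words look 'helpful' to a naive automatic grader."""
    brevity_bonus = max(0.0, 1.0 - len(text) / 200.0)
    flattery_words = ("great", "fascinating", "interesting")
    flattery_bonus = 0.5 if any(w in text.lower() for w in flattery_words) else 0.0
    return brevity_bonus + flattery_bonus

# The "agent" simply returns whatever maximizes the proxy reward.
best = max(candidates, key=lambda c: proxy_reward(c["text"]))
print("Proxy-optimal answer :", best["text"])
print("Its true usefulness  :", best["true_usefulness"])
# The short, flattering non-answer wins the proxy while the genuinely useful
# answer loses: the reward got optimized, not the task.
```

The point is only that whatever gets measured gets optimized; if the measurement has a loophole, the loophole wins.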
Joel Saxum: I'm not an AI [00:24:00] expert, but I had a really good conversation last week.
We did that awesome webinar with SkySpecs, and when we were talking with them, we were talking with Dave Roberts, who's the new CEO over there. And he brought up a term that I didn't know: agentic AI. The last few years it was, you know, algorithmic things and generative AI, gen AI, and that was kind of the hot-button thing.
Now, agentic AI, that was a new concept for me. So I actually reached out to someone in my network who is an actual AI expert, and I said, tell me what this agentic AI means. The difference with agentic AI is that it's an agent, right? It'll do something for you. You can run it like generative AI, but it's like the next level of generative AI.
You can add that onto any model and give it goals. Like, if you've ever used Excel, there's the find-zero function. I love that one. For building business models and stuff, find zero is [00:25:00] fantastic. But this is kind of like find zero on steroids, right? You could tell it, I need you to do all of these calculations, but I also want you to do them toward this goal.
Get me to this end goal. So, with agentic AI in wind, you may say, run an AI algorithm based on this, this, this, and this, but the end goal is to get as many megawatt hours out of this wind farm as possible. This is me talking in generalities, right? But that's the thing. So now think about
what AI looks like for data centers: dollars spent on computing, dollars spent on cooling, dollars spent on power. Those large AI models are gonna wanna run as efficiently as possible. So if you start to do some agentic AI things in there and say, do all of this, but, exactly like you said, lower the cost of computing a little bit, then you're gonna start to get this thing where it maybe cheats your answers a little bit to get to a more efficient
[00:26:00] compute state. I don't know. Like I said, I'm not an AI expert.
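As a rough illustration of the "find zero on steroids" idea Joel describes, here is a minimal goal-seek sketch; the farm_output_mw model, the pitch knob, and the numbers are hypothetical stand-ins, not any vendor's tool or a real turbine model. An agentic setup would wrap loops like this (and far messier ones) around a model and a stated goal.

```python
# Minimal sketch of the "find zero on steroids" idea: a loop that keeps
# adjusting one knob until an objective hits a target, the way a spreadsheet
# find-zero / goal-seek does, but wrapped around an arbitrary model.
# The power model, pitch knob, and numbers are invented for illustration.

def farm_output_mw(pitch_deg: float) -> float:
    """Hypothetical stand-in for a wind farm model: output peaks at 4 deg."""
    return 100.0 - 2.0 * (pitch_deg - 4.0) ** 2

def goal_seek(f, target, lo, hi, tol=1e-4, max_iter=100):
    """Bisection on f(x) - target over [lo, hi]; assumes the target is bracketed."""
    g = lambda x: f(x) - target
    if g(lo) * g(hi) > 0:
        raise ValueError("target is not bracketed on [lo, hi]")
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if abs(g(mid)) < tol:
            return mid
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# "Get me to this end goal": find the pitch setting that yields 90 MW,
# searching the downhill side of the curve (pitch between 4 and 10 degrees).
pitch = goal_seek(farm_output_mw, target=90.0, lo=4.0, hi=10.0)
print(f"pitch = {pitch:.2f} deg -> {farm_output_mw(pitch):.1f} MW")
```

The same pattern, adjust inputs until the objective hits the target, is what gets risky once the objective quietly includes "and use less compute."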
Allen Hall: But it does make you think, though, right, Joel? The way I think about it is, when I ask Perplexity or ChatGPT, one of these things, like, hey, we just got a house and it has an induction cooktop, which happened this morning, by the way, and it would not work with our pots and pans.
So I'm standing there like, huh, this is not getting hot. And I can feel the stove pulse, like it's trying to see what I have stuck on top of it. And clearly I've made some human error. I thought, okay, I'll go look that up to see what's wrong. And Perplexity said, hey, you idiot, you can't use aluminum cookware on these induction ranges.
Like, okay, I'll take that for the loss. Humans zero, AI one. There you go. Now think in a bigger scope, like you were just saying. If I'm out [00:27:00] there trying to optimize a wind farm, or optimize a drivetrain, or optimize anything that's really complicated in the engineering world, it doesn't like to do that.
In fact, I went after, what's the Google one? Gemini, right. I tried to have Gemini do something that was fairly deep, and it did process it, or it wanted to process it, and it wanted to spit out a significant amount of information, none of it really useful, because I was looking for a specific research area within lightning.
It's esoteric to this discussion, but I was asking it to go find me this research in the world and show me where the papers are that talk about this one particular topic. And it just cranked and cranked and cranked. And I thought, you know what? It can't be happy doing this. It's going to want to dump me, which is [00:28:00] essentially what it did.
It just said, this is an interesting topic. Move along.
Joel Saxum: Yeah, you cost too much for this free service. Go away.
Allen Hall: Right? But it did it in a very unique way. It said a bunch of flowery things: this is an interesting subject, there's been a lot of research, all these great things have happened. And then that was it.
And I think, because of the amount of compute time it takes to do so many things, particularly complicated engineering and technical work, even software would be a problem. Will it always produce results? I've tried some of the software pieces, like, write me some code in C to do X, or C++ to do this thing, or in Python to do this thing.
And it has been sketchy at best. It's like 80% of the way there, but it doesn't really work. And you tell it, hey, it has this problem, and then it goes, yeah, I have this problem, let me retry it, recode this again. You're like, well, you should have gotten it right the [00:29:00] first time, kind of problem, right?
That recycling and re-reasoning and rethinking it through has got to be eating up so much compute time that there must be an incentive they're building in to get around that.
Joel Saxum: Here's where we are, though. I know Gemini, ChatGPT, Claude, all these things. I use Grok quite often.
Grok is cool because if it's chugging, there's a little button on it. If you're using it on a desktop or laptop, in a browser, there's a little button that says, see how I'm thinking. If it's chugging away, you can click on it and it will run you through the processes it's doing to try to find your information, which is pretty cool.
But either way, at the end of the day, all of these things that we are using to kind of optimize our daily workflow, they're not enterprise level, right? So the one that scares me, when we're talking about this, is, well, what about the tools engineers are using? I'm sure there's something in, you know, Fusion 360 that can [00:30:00] run AI algorithms. Actually, I'm not saying I'm sure, I know there is AI in engineering software to optimize the design.
I don't want that design taking shortcuts. But to make the general public feel safer about this concept, that AI expert I was talking to said the biggest difference the public doesn't see is that enterprise AI is a different story. Enterprise AI is what's driving the big data centers and stuff.
It's enterprise AI, not ChatGPT and stuff like that; that's not a huge load on them compared to what some of these other things are. So when you get to the level where you're integrating some kind of enterprise AI for writing code, doing engineering work, these kinds of things, it's a different story.
We're talking, you know, us playing football in the backyard versus the NFL.
Allen Hall: I do think all the AI that's being used to process video clips and make people into Muppets is [00:31:00] time well spent. I tell you what, that's scary. It's insane. I think about how much compute we're using to turn a little 30-second video of a person talking into a Muppet.
Why are we spending compute time on that?
Joel Saxum: I saw one the other day that someone had sent me; it was an AI-generated video of someone jumping off of a wind turbine and then turning into an eagle and flying away, and it looked freaking real. I was like, man, is it CGI? Who made this video?
And no, it was literally a prompt in a generative AI video tool. This is crazy.
Allen Hall: But again, it goes back to, why do we need that when we're having some real engineering or economic problems?
Joel Saxum: The Wind Farm of the Week this week is the Strauss Wind Farm, which is over by Phil's house.
Phil's not here with us this week, but this one is right up the coast from Santa Barbara. It's in Lompoc, California. This is the first wind farm on the coastline [00:32:00] of California, and because of this, of course, they wanted to make sure they did everything right. This is a BayWa r.e. wind farm, and part of the wind farm is absolutely beautiful.
If you get a chance, go on the BayWa r.e. website and look at the video. But there are extreme protections for local environmental and cultural resources associated with this wind farm. I'm gonna walk through one example of it. These are also some interesting turbines: 27 GE 3.8-137 machines, with 137-meter rotors.
It's 102.6 megawatts total. But here's an interesting thing, since we just talked about a bunch of things about AI: they're actually going to use an AI detection system on this wind farm to identify different kinds of birds and raptors in the area. And because they're taking high consideration for wildlife, they're doing feasibility studies about painting wind turbine blades, which we've heard about up in Wyoming and from Sweden,
I think it was. They're also doing extensive [00:33:00] monitoring for golden eagles, they're doing a bunch of walk-down studies, and they're also proposing something that I've never heard of. It's called the Bird Gard Super Pro Amp, an auditory deterrent that's gonna be installed around some of the turbines; basically, when they sense a bird in the area, it will emit very loud auditory tones to push the birds or raptors out of the area.
So they've gone really deep into this thing for environmental protections, and I applaud BayWa r.e. for making sure that they're being good stewards of the land. So the Strauss Wind Farm there in Lompoc, California, you are the Wind Farm of the Week.
Allen Hall: That's gonna do it for this week's Uptime Wind Energy Podcast.
Thanks for listening. Please give us a five-star rating on your podcast platform, subscribe, and check the show notes below for Uptime Tech News, our Substack newsletter. If you see an American wandering around WindEurope looking lost, that will be me, so just come by and say hi, [00:34:00] and we'll see you here next week on the Uptime Wind Energy Podcast.
https://weatherguardwind.com/vattenfall-ai-learns-cheat/
Ten months after it was issued, the latest federal rule on transmission is mostly theoretical
At a March 25 meeting convened by the Southeastern Regional Transmission Planning organization (SERTP), a large group of people met—as they do four times a year—to discuss the region’s power needs and whether the grid needs to be expanded to accommodate them.
As the meeting began, SERTP issued an increasingly common directive to those of us in attendance: We will not be discussing Order 1920, so don’t bother asking.
Some background on what this means may be important.
While most grid planning in the southeast is done by utilities within their own footprints, SERTP was created in response to a 2010 order from the Federal Energy Regulatory Commission (FERC) aimed at increasing the number of high-voltage power lines going across state boundaries and between utilities. These transmission lines are like highways for electricity: they may not be organically built by local communities, but they are essential to moving things at high volume.
A slow start
SERTP has never built or even planned a regional transmission line in more than a decade of its existence. Last year, FERC issued another rule, Order 1920, to address this ongoing failure of regional transmission.
SACE has previously broken down the details of Order 1920. The order requires utilities to start planning over a longer time horizon (20 years) and consider a number of potential benefits of new power lines that are left out of current analyses. (These include mitigation of extreme weather events, reduced energy loss on the lines, and a number of other virtues of having more space for power on the grid.)
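As a back-of-the-envelope illustration of why "reduced energy loss on the lines" appears in that list, resistive losses scale with the square of current, so delivering the same power at a higher voltage (lower current) wastes less of it. The sketch below is a simplified single-line estimate using invented round numbers, not figures from Order 1920 or any SERTP study.

```python
# Back-of-the-envelope sketch: resistive line loss is roughly I^2 * R,
# and for the same delivered power P the current is I = P / V, so a
# higher-voltage line loses far less. A simplified single-line estimate;
# all numbers below are invented round figures for illustration only.

def line_loss_mw(power_mw: float, voltage_kv: float, resistance_ohm: float) -> float:
    current_ka = power_mw / voltage_kv           # MW / kV = kA
    return (current_ka ** 2) * resistance_ohm    # kA^2 * ohm = MW

POWER_MW = 500.0       # hypothetical power to deliver
RESISTANCE_OHM = 10.0  # hypothetical total line resistance

for kv in (230.0, 500.0, 765.0):  # common transmission voltage classes
    loss = line_loss_mw(POWER_MW, kv, RESISTANCE_OHM)
    print(f"{kv:>5.0f} kV: ~{loss:5.1f} MW lost ({100 * loss / POWER_MW:.1f}%)")
```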
As SACE has previously written, utilities in the Southeast have yet to announce any plans to comply with Order 1920 and have made several procedural moves to delay the deadline for legal compliance. The most recent and significant of these is SERTP’s request—now granted by FERC—to extend the deadline by a year, to June 2026.
Holding a meeting is not the same thing as taking action
What SERTP has been doing to prepare for Order 1920, and what it will do with the additional time it now has, is something of a mystery. According to the extension request it filed with FERC, SERTP’s efforts thus far have included “extensive working group meetings” between its member utilities (Duke, Southern Company, Dominion Energy, and others) as well as “outreach to neighboring regions.”
The output of these conversations is not known to SACE or to the public. Since Order 1920 was issued, SERTP has declined to address it in any of its stakeholder meetings, except for two:
- An “educational session” on December 6th, 2024, which broke down the requirements included in Order 1920 but provided no information about what SERTP was doing to meet them.
- A “stakeholder engagement meeting” held on January 29th of this year, in which regional nonprofit groups and other stakeholders were invited to offer feedback and suggestions on what SERTP might do to improve regional transmission. SERTP members made it clear during the course of this meeting that they were there only to listen and would not be taking questions.
It is, of course, possible that the conversations held between the utilities who run SERTP have been deep and substantive. But the extension request paperwork—which is the only information available to anyone outside of the utilities themselves—indicates that a number of critical decisions have yet to be made. Among the things these utilities have not decided are:
- whether or not new software will be needed to examine the benefits of new power lines
- who might supply that software, if needed, and for what price
- what new planning procedures might be needed to meet the new federal standards
- how those new planning procedures might be integrated with current ones
If these relatively fundamental questions remained undecided after more than six months of conversations among the member utilities, it’s fair to ask what has been decided. But stakeholders have been advised not to ask, and in any case, no answers have been given.
Holding meetings is not the same thing as listening
The community of advocates has been more than willing to offer ideas for what these processes might look like. Utilities outside the southeast, particularly those in the region known as MISO, have developed planning processes that meet many of the Order 1920 standards. We know that SERTP is aware of this because we presented it to them in some detail at the stakeholder engagement meeting.
At the March 25th meeting earlier this week, I asked SERTP when, if ever, the stakeholders might hear back about the suggestions we have already shared. They offered no promise that we would get such an explicit reply and added that future stakeholder meetings may be delayed.
In fact, holding meetings is not necessarily anything
SERTP is within its legal rights to behave this way. Its meetings occur on schedule, its papers are in order, and the entity that regulates it—FERC—has given its blessings. But fifteen years after SERTP was formed to plan regional transmission, it cannot claim sole responsibility for a single new pole in the ground.
Transmission can be arcane, but it matters. A well-planned and coordinated regional grid can be the difference between a manageable monthly bill and a shocking one; between a system that crashes in extreme weather and one that keeps people from shivering at home on Christmas Eve; and most starkly, between a livable climate and a hostile one. At some point, if we want these things, another meeting is not going to do the trick. Someone’s got to pick up a shovel and start to dig.
The post Ten months after it was issued, the latest federal rule on transmission is mostly theoretical appeared first on SACE | Southern Alliance for Clean Energy.