Tuesday, March 26, 2024

Consolidated accounts can act as the AGI model.



In the newest models, the AGI (Artificial General Intelligence) is the center of the AI. Even in the most futuristic models, it is the link between the lower-level AIs and the ASI (Artificial Superintelligence), singularity, and transcendence systems that use the AGI to create those lower-level AIs. In some versions, the self-aware AI and the AGI are separated from each other, but in automatic systems they form a symbiotic whole. Pre-made domains are the building blocks that give the AI practically unlimited expandability: it learns by connecting new databases or domains to itself.

If we think of consolidated account systems as models for the AGI, the next step below the AGI toward the executing level is the reasoning machines. Those machines are the main domains, built for certain situations. A reasoning machine is a domain that holds the orders and authority to respond to events such as conflicts. The subdomains, or domain-specific expertise machines, are the controlling tools that respond to a crisis. A domain can cover helicopters, ground vehicles, and so on.

The self-aware machine notices a threat or something else the system must react to, and then orders the AGI to create a lower-level AI, or gives orders directly to the limited, lower-level AIs that control the smaller and more specific parts of the system. In this hierarchy, the singularity and the ASI (Artificial Superintelligence) sit at the top. The AGI involves the language models that the system needs to produce the algorithms that respond to the situation.


Neural network-based solutions are the ultimate tools. 




(The image above) The AGI (Artificial General Intelligence) data structure can look like a consolidated account. Each account is a specific, independent domain that controls its subdomains.

When the focus narrows, accuracy rises, and the system requires another type of code. At the final level, the system must control the robots, or the interface that controls individual computers and other devices, that are connected to the domain.

In the domain-specific network architecture, when data comes from a lower level to an upper level, the domain analyzes it and tries to solve the problem alone. If it finds no answer within a certain time, the domain calls for help. The benefit of neural network-based solutions is that the system can share its resources across multiple places and transfer operations between networked domains.

If a domain cannot respond to a situation, it calls for more force from another domain. When that request crosses domain limits, the upper-level domain accepts or denies access. That means a local problem does not escalate to the entire system, which leaves resources free for other actions, as the sketch below illustrates.
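A minimal sketch of this escalation rule, with all class and method names invented for illustration: a domain first tries to handle a request itself, and only escalates to its parent domain when the parent authorizes the cross-domain request.

```python
# Hypothetical sketch of hierarchical domain escalation.
# Class and method names are illustrative assumptions, not a real API.

class Domain:
    def __init__(self, name, parent=None, capabilities=None):
        self.name = name
        self.parent = parent                    # upper-level domain, None at the top (AGI) level
        self.capabilities = set(capabilities or [])

    def can_handle(self, request):
        return request["type"] in self.capabilities

    def authorize(self, request):
        # The upper-level domain accepts or denies cross-domain requests.
        return True                             # placeholder policy

    def handle(self, request):
        if self.can_handle(request):
            return f"{self.name} handled {request['type']}"
        if self.parent and self.parent.authorize(request):
            # Escalate only with the parent's approval, so a local problem
            # does not spread through the whole system.
            return self.parent.handle(request)
        return f"{self.name} denied or could not handle {request['type']}"

agi = Domain("AGI", capabilities={"conflict_response"})
air = Domain("helicopters", parent=agi, capabilities={"air_recon"})

print(air.handle({"type": "air_recon"}))          # handled locally
print(air.handle({"type": "conflict_response"}))  # escalated to the AGI level
```

Because authorization happens at the parent, a denied request stops at the domain boundary instead of propagating upward.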



Stage 1 – Rule-Based Systems 


Stage 2 – Context Awareness and Retention 


Stage 3 – Domain-Specific Expertise 


Stage 4 – Reasoning Machines


Stage 5 – Self-Aware Systems / Artificial General Intelligence (AGI)


Stage 6 – Artificial SuperIntelligence (ASI) 


Stage 7 – Singularity and Transcendence


Source: (Technology magazine, The evolution of AI: Seven stages leading to a smarter world)


Domain-based systems are tools that use language models to control the systems inside each domain. The overall system can look like a consolidated account structure: the AGI is the mainframe for the sublevels, and those sublevels involve more limited but more accurate algorithms. That is why the AGI and its subsystems resemble a consolidated account structure.

Interaction is what makes the AGI so powerful. The system involves multiple different data structures working under sorted domains. We can say that this type of AI can involve sub-AGIs that become more accurate but more limited. When the AGI gets an order, it transfers that order to the domain responsible for that kind of situation.

In that kind of system, the code is easier to control. The AGI can use pre-made domains to build responses to the orders, and the interaction means that if the AGI is damaged, the subdomains can repair it. When the AGI commands things like robots through the domain-specific systems, it also collects information on how effective the response was, and that helps the system develop itself. A simple way to record that feedback is sketched below.
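A minimal sketch, with hypothetical names, of how effectiveness feedback could be stored per domain so the system can prefer the domain that has worked best for a similar order.

```python
# Illustrative sketch: record how effective each domain's responses were
# and use the running average to pick a domain for the next, similar order.
from collections import defaultdict

scores = defaultdict(list)

def record_feedback(domain, effectiveness):
    """effectiveness: 0.0 (failed) .. 1.0 (fully successful)."""
    scores[domain].append(effectiveness)

def best_domain(candidates):
    # Pick the candidate domain with the highest average past effectiveness.
    return max(candidates,
               key=lambda d: sum(scores[d]) / len(scores[d]) if scores[d] else 0.0)

record_feedback("helicopters", 0.9)
record_feedback("ground_vehicles", 0.6)
print(best_domain(["helicopters", "ground_vehicles"]))   # helicopters
```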


https://technologymagazine.com/ai-and-machine-learning/evolution-ai-seven-stages-leading-smarter-world


Image 1) 

Created by AI

Image 2) 

https://docs.aws.amazon.com/architecture-diagrams/latest/modern-data-analytics-on-aws/modern-data-analytics-on-aws.html


Saturday, March 9, 2024

Humans should be at the center of AI development.


"Experts advocate for human-centered AI, urging the design of technology that supports and enriches human life, rather than forcing humans to adapt to it. A new book featuring fifty experts from over twelve countries and disciplines explores practical ways to implement human-centered AI, addressing risks and proposing solutions across various contexts." (ScitechDaily, 50 Global Experts Warn: We Must Stop Technology-Driven AI)


AI is the ultimate tool for handling things that behave predictably. Things like planetary orbits and other mechanical processes that follow natural laws are easy for AI. The AI might feel human and can even have a certain accent, and that is not very hard to program.

It just requires an accent wordbook, and then the AI can transform grammatically standard text into text with a certain accent before driving that data to the speech synthesizer. The accent mode follows the same rules as language translation programs. Going the other way, the accent wordbook is needed to translate spoken commands back into standard grammar so that the system understands them.
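A toy sketch of the accent wordbook idea: a lookup table rewrites standard spellings into accented ones before the text goes to a speech synthesizer. The word list and the accent are invented for illustration.

```python
# Toy "accent wordbook": map standard spellings to accented ones before
# the text is passed to a speech synthesizer. The entries are made up.
ACCENT_WORDBOOK = {
    "water": "watah",
    "better": "bettah",
    "car": "cah",
}

def apply_accent(text):
    words = text.lower().split()
    return " ".join(ACCENT_WORDBOOK.get(w, w) for w in words)

print(apply_accent("The water is better in the car"))
# -> "the watah is bettah in the cah"
```

Running the same table in reverse is the translation step that turns accented speech back into standard text for the command parser.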

The problem with spoken commands is that most people do not speak grammatically in everyday life, while computers require that the user gives commands in precisely the right way, with precise grammar. In AI-based language models the system might accept multiple ways to give a command, but the user must still use one of those choices, and even if the AI reacts to accents, the user must use an accent that is on the list.

Users create most AI problems themselves. If we want an AI to predict the moment of our death, we get what we ask for. When we search only for negative things like death and violence, the screen fills with those things. So is this kind of negative answer the user's fault or the AI's?

People also expect too much from these systems. They want the AI to write a thesis for them, and that causes ethical problems. In those cases the AI acts like a ghostwriter, and using ghostwriters is strictly prohibited: presenting someone else's text as your own is called plagiarism.

AI is an excellent tool for collecting sources for a thesis, but the person should know the topic for which the AI is used as a tool, and they should write the text themselves; otherwise there can be many factual errors.

AI is a good tool for computer programming, but it is not as good when it must act as a therapist. If people want to get bad answers from an AI, and its use turns into masochism, that is their problem. If people want to generate Nazi-soldier images with an AI, we must remember that there are already plenty of authentic Nazi-soldier images on the net, so why should the AI make that kind of image?

Those things are problematic. The AI can create many things, like the cyborg chicken above this text, but why should it recreate historical characters? I once tried to make an AI generate a photorealistic version of Vincent van Gogh's "Starry Night" painting, and the AI refused. There would be no problem with AI art if it were clearly mentioned that the work was made using AI.

AI plagiarism is also easy to prevent. The AI must simply keep records or databases of the material it has created, and clients such as social media channels can then check whether a text or image was created with AI and label it "made by AI". This kind of plagiarism detection has been in use in high schools for over 15 years. So could the same type of plagiarism detection be used for AI?
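One way to keep the records described above would be a hash registry of generated content; a client such as a social media platform could then check whether an item came from the AI. The function names are hypothetical, and a plain hash only catches verbatim copies, so edited content would need watermarking or perceptual hashing instead.

```python
# Hypothetical provenance registry: store a hash of every generated item,
# so a client can later check whether content came from the AI.
import hashlib

registry = set()

def register_generated(content: bytes):
    registry.add(hashlib.sha256(content).hexdigest())

def was_generated_by_ai(content: bytes) -> bool:
    return hashlib.sha256(content).hexdigest() in registry

register_generated(b"example generated text")
print(was_generated_by_ai(b"example generated text"))   # True
print(was_generated_by_ai(b"human-written text"))       # False
```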

https://scitechdaily.com/50-global-experts-warn-we-must-stop-technology-driven-ai/

Friday, March 8, 2024

Metamaterials can change their properties in an electric or electro-optical field.

 

"Researchers have created a novel metamaterial that can dynamically tune its shape and properties in real-time, offering unprecedented adaptability for applications in robotics and smart materials. This development bridges the gap between current materials and the adaptability seen in nature, paving the way for the future of adaptive technologies. Credit: UNIST" (ScitechDaily, Metamaterial Magic: Scientists Develop New Material That Can Dynamically Tune Its Shape and Mechanical Properties in Real-Time)

Metamaterials can change their properties in an electric or electro-optical field. The electro-optical activator can also operate in the IR range, which means the metamorphosis in the material can be activated thermally.

AI is the ultimate tool for metamaterial research. Metamaterials are nanotechnical or quantum-technical tools that can change their properties, such as reflectivity or their state from solid to liquid, when an electric or optical effect hits the material. The metamaterial can crumple when electric or optical stress impacts its atoms, and temperature can also change the state of the material.


"The team has developed a world-leading MWP chip capable of performing ultrafast analog electronic signal processing and computation using optics. Credit: City University of Hong Kong." (ScitechDaily, 1,000x Faster: Ultrafast Photonics Chip Reshapes Signal Processing)

That can make it possible to build new stealth materials, or robots that behave like droplets while traveling to a target and then turn solid. Such robots can be used in many ways: they can close blood vessels that feed tumors, or seal leaks in oil and gas pipes.

The new materials can be used to cover qubits in new quantum computers. Portable quantum computers require solid-state qubits that can operate at room temperature, and that kind of system requires a powerful AI that can control the qubit and compensate for outside effects. Reacting to outside effects, such as changes in radiation level, requires that the material can start counter-actions the moment the system notices a change in radiation stress.
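As a rough illustration of that control idea (all thresholds and function stubs are invented, not from the source), a monitoring loop triggers a counter-action as soon as the measured radiation level rises above an allowed limit.

```python
# Illustrative control loop: start a counter-action as soon as the measured
# radiation stress around the qubits exceeds an allowed limit.
# The sensor reading and the counter-action are stubs/assumptions.
import random

RADIATION_LIMIT = 1.2   # arbitrary units for this sketch

def read_radiation_level():
    return random.uniform(0.8, 1.5)   # stand-in for a real sensor

def start_counter_action(level):
    print(f"counter-action triggered at level {level:.2f}")

for _ in range(5):
    level = read_radiation_level()
    if level > RADIATION_LIMIT:
        start_counter_action(level)
```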


"Scientists at the DOE’s Brookhaven National Laboratory have discovered that coating tantalum with magnesium significantly enhances its properties as a superconducting material for quantum computing. This coating prevents oxidation, increases purity, and improves the superconducting transition temperature of tantalum, offering promising advancements for the development of qubits and the future of quantum computing." (ScitechDaily, Breaking Barriers in Quantum Research: Magnesium-Coated Tantalum Unveiled)


Quantum computers can operate remotely from deep underground shelters. Those systems use the internet to communicate with their users. 


Optical neural networks can revolutionize AI. An optical neural network does not raise the temperature of the system, and it can also operate as a controller for superconducting quantum computers, which makes those systems interesting. The great thing about optical neural networks is that the system can switch between binary and quantum modes.

In that case, the system can have thousands of optical processors that turn into a virtual quantum computer, or perhaps they can create superpositions and entanglements between standing photons in those microchips. Either way, the optical neural network can be a more powerful tool than anybody predicted.

"Recent research has made significant strides in the development of optical neural networks, presenting a sustainable alternative to the energy and resource-intensive models currently in use. By leveraging light propagation through multimode fibers and a minimal number of programmable parameters, researchers have achieved comparable accuracy to traditional digital systems with significantly reduced memory and energy requirements. This innovative approach offers a promising pathway toward energy-efficient and highly efficient artificial intelligence hardware solutions." (ScitechDaily, Not Science Fiction: How Optical Neural Networks Are Revolutionizing AI)


The optical neural network can also act as a sensor itself. If something comes close to the metamaterial layer, the neural network sees an energy change in its structures. Laser rays traveling in glass fiber can act as a synthetic sense of touch: when something touches a glass fiber running between certain microchips, the system sees that the trajectory of the laser ray changes.

Those microchips can send information about the event to the CPU (Central Processing Unit), which decides what the system must do. The CPU itself can also be a neural network of microchips.

When something touches that optical network, it sends information about the touch to the microchips that control the metamaterial and its properties. Because the microchips that operate the metamaterial are sensors themselves, they can react very fast. When the physical environment changes, the neural network changes the electric or physical conditions in the metamaterial.
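A minimal sketch of the touch-sensing idea: compare the received light intensity at the end of the fiber against a baseline and report a touch when the drop exceeds a threshold. The numbers and names are assumptions for illustration only.

```python
# Sketch of fiber-optic touch sensing: a touch disturbs the fiber and the
# received laser intensity drops below the baseline by more than a threshold.
BASELINE = 1.00          # normalized intensity with nothing touching the fiber
TOUCH_THRESHOLD = 0.05   # drop that counts as a touch (made-up value)

def is_touched(measured_intensity):
    return (BASELINE - measured_intensity) > TOUCH_THRESHOLD

readings = [1.00, 0.99, 0.91, 0.97]
for r in readings:
    print(r, "touch" if is_touched(r) else "no touch")
```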


https://scitechdaily.com/1000x-faster-ultrafast-photonics-chip-reshapes-signal-processing/


https://scitechdaily.com/breaking-barriers-in-quantum-research-magnesium-coated-tantalum-unveiled/


https://scitechdaily.com/metamaterial-magic-scientists-develop-new-material-that-can-dynamically-tune-its-shape-and-mechanical-properties-in-real-time/


https://scitechdaily.com/not-science-fiction-how-optical-neural-networks-are-revolutionizing-ai/


Wednesday, March 6, 2024

Quantum breakthrough: stable quantum entanglement at room temperature.


"Researchers have achieved quantum coherence at room temperature by embedding a light-absorbing chromophore within a metal-organic framework. This breakthrough, facilitating the maintenance of a quantum system’s state without external interference, marks a significant advancement for quantum computing and sensing technologies". (ScitechDaily, Quantum Computing Breakthrough: Stable Qubits at Room Temperature)

Japanese researchers created stable quantum entanglement at room temperature. The system used a light-absorbing chromophore embedded in a metal-organic framework, which is a great breakthrough in quantum technology. Room-temperature quantum computers are the new development that will drive the next revolution in quantum computing, and this technology may reach markets sooner than we think. The quantum computer is a tool that requires advanced operating and support systems.

When the support system sees that the quantum entanglement starts to reach energy stability, it must create another quantum entanglement and transport the data out of the first one. Another way to keep the quantum entanglement stable is to pump energy out of the receiving part.

Information spreads between qubits like a plague, and quantum entanglement is like a wire that carries that plague to another qubit. The qubits are the particles at both ends of the entanglement, and the system creates superposition and entanglement between those particles.

In that process, the system adjusts the particles' oscillations to the same frequency, and if one particle's energy level is lower, energy and information flow to that lower-energy particle. Stable quantum entanglement requires that the system can keep the receiving side of the entanglement at a lower energy level.



"The team has developed a world-leading MWP (Microwave Photonics) chip capable of performing ultrafast analog electronic signal processing and computation using optics. Credit: City University of Hong Kong" (ScitechDaily, 1,000x Faster: Ultrafast Photonics Chip Reshapes Signal Processing)

When both sides of the entanglement are at the same energy level, a standing wave between them breaks the quantum entanglement. The support system can send a side-coming laser beam to the receiving qubit, which makes it transport energy onward into lower-energy particles, and the system must then create a new qubit on the opposite side. Those qubits can form a morphing network in the system.

AI-controlled systems require microchips that can operate at very high speed. Photonic microchips are 1,000 times faster than regular microchips, and they do not form magnetic fields around them, so they can operate together with quantum computers. AI-based operating systems control photonic microchips whose mission is to turn binary data into qubits, and that makes photonic microchips more powerful than ever before.


AI-based kernels and operating systems increase those systems' power. 


In this structure, the photonic microchips can also act as morphing tools. It is possible that a morphing microprocessor can change its state between a quantum computer and a binary computer, which makes this kind of system flexible and powerful. New metamaterials make it possible to create switches and logic gates that make the system even more effective.

When researchers want to make extremely fast binary microprocessors, they can use three lines. Two lines transport data: the operating system interprets data that travels on line 1 as zero (0) and data that travels on line 2 as one (1). A third line tells the system that the power is on. This kind of system can have three layers:


1) Regular binary layer that runs AI-based operating system. 

2) Photonic processor layer. 

3) Quantum layer.


The regular binary layer runs the AI-based kernel. The photonic microchips can be even faster than anybody believed. The system may give a number to every data impulse that it sends through the photonic microchips, so each bit carries a recognition part, but that requires an AI-based operating system and an AI-based kernel.

In that system, the transmitter sends the number 1 (or 3, 5, 7, ...), which tells the operating system that the data bit comes from line 1. When it sends 2 (or 4, 6, 8, ...), that is interpreted as line 2. The regular binary system is what controls the photonic system.

That helps the AI-based operating system put the bits back in the right order if there is a malfunction in the lasers. Line 1 gives odd numbers to its bits, and line 2 gives even numbers to its bits. The ability to number the bits also makes it possible to transport the data on a single line, as the sketch below shows.
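A small sketch of the numbering scheme described above, with the encoding details filled in as assumptions: odd tag numbers mark bits from line 1 (interpreted as 0), even tag numbers mark bits from line 2 (interpreted as 1), and sorting by tag restores the original order even if the impulses arrive out of order on one line.

```python
# Sketch of the numbered-bit scheme: odd tags mean the bit came from line 1
# (value 0), even tags mean line 2 (value 1). Sorting by tag restores the
# transmission order even when impulses arrive out of order.
def decode(tags):
    bits = []
    for tag in sorted(tags):          # tag order = transmission order
        bits.append(0 if tag % 2 == 1 else 1)
    return bits

received = [2, 1, 4, 3, 6]            # arrived out of order
print(decode(received))               # [0, 1, 0, 1, 1]
```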


https://scitechdaily.com/1000x-faster-ultrafast-photonics-chip-reshapes-signal-processing/


https://scitechdaily.com/metamaterial-magic-scientists-develop-new-material-that-can-dynamically-tune-its-shape-and-mechanical-properties-in-real-time/



https://scitechdaily.com/quantum-computing-breakthrough-stable-qubits-at-room-temperature/





Why should AI forget things?


Memory is like an attic. If it is full of stuff, it becomes slow to use: the owner must check far more merchandise than in a clean attic. When people clean their attics and carry the unnecessary merchandise away, the room becomes easier to use.

People forget things because forgetting keeps their memory effective. If only the things that people actually need are left, memory works efficiently, and one reason to remove things from memory is simply that we do not use them. In this sense the brain and computers follow the same rule: they forget because their memory is not unlimited.



Image: Quanta Magazine

Why do we forget? Forgetting makes our memory more effective because there are fewer memories to search. Memory is impressive, but even human memory is not unlimited. When the system stores something, it must reserve a memory unit for each memory, and when it wants to find something, it may have to check every memory unit. Computer memories and the human brain operate the same way: they store memories in a network structure, and that memory does not have unlimited capacity.

So, if we do not use some memories, we should not keep storing them. If the system erases memories that it does not use, it releases those memory units for new memories. Memory is like a network of cells, and reconnecting those cells in a morphing neural network makes it possible to reshape images and whole sets of memories.

When artificial intelligence erases some memories, it does the same thing that regular computer users do every spring: it cleans unnecessary data out of its memory, which makes it faster. The algorithm tracks how often a user or program opens a certain file. If the file has not been opened, and it has no connections to software that is in use, the system erases it. That kind of cleanup makes the AI more effective.
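A minimal sketch of that forgetting rule: track when each item was last used and erase items that have not been accessed within a given window. The window length and the data structure are assumptions for illustration.

```python
# Sketch of the forgetting rule: erase stored items that have not been
# accessed within a chosen time window. The window length is an assumption.
import time

MAX_IDLE_SECONDS = 30 * 24 * 3600      # about one month, illustrative only
memory = {}                            # key -> (value, last_access_time)

def remember(key, value):
    memory[key] = (value, time.time())

def recall(key):
    value, _ = memory[key]
    memory[key] = (value, time.time())  # touching a memory refreshes it
    return value

def forget_unused():
    now = time.time()
    for key in [k for k, (_, t) in memory.items() if now - t > MAX_IDLE_SECONDS]:
        del memory[key]                 # free the memory unit for new data

remember("rarely_used", 42)
forget_unused()
```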

https://www.quantamagazine.org/how-selective-forgetting-can-help-ai-learn-better-20240228/

Fusion startup plans to clean orbital trajectories using laser rays.



The idea of using lasers as satellite killers is not new. One problem is how to create high-power lasers that can operate at orbital altitude; another is how to remove the debris from orbit. The new satellite-killer concepts are based on ideas for removing space junk from orbit.

The same systems that are used to remove space junk can be used in wartime to push enemy satellites into the ocean. Another version uses EMP impulses to shut down enemy satellites. Such a system would be needed to remove EMP or FOBS (Fractional Orbital Bombardment System) weapons from orbit, and it could also destroy low-flying reconnaissance satellites by pushing them into the atmosphere. There is a suspicion that a FOBS is a so-called sleeping satellite equipped with a nuclear warhead.



In other models, satellites can fire tiny soft projectiles that push the targeted satellites into the atmosphere without destroying them. In yet other versions, a robot attaches small balloons to satellites, and a laser then detonates those balloons to push the satellite out of its trajectory. A satellite could also carry a marking system that signals when it has a malfunction and must be removed from space.


There are also plans for a flying recycle bin, a junk-collecting spacecraft that takes non-operational satellites out of orbit. The spacecraft simply pulls the targeted satellite inside and then returns to the ground. This helps keep classified military satellites secret, and the crew can analyze whether there have been attempts to tamper with the satellite.


In the future, there can be service robots in orbit whose mission is to attach small rockets to satellites that are no longer operational. Those small rockets can then push the satellites down toward the Pacific Ocean.

In some visions, satellite operators must reserve a small dose of fuel in the satellite so that they can push it into the atmosphere at the end of its life. In another vision, satellites are equipped with heat shields and return to the ground using parachutes. That would allow those satellites to be used to analyze the effects of cosmic radiation on microchips, and a normal recycling center could then recycle them.

The idea is not to destroy satellites; these lasers will not blow them to pieces. The idea is that a laser system can push a satellite or other piece of space junk into the atmosphere. A highly accurate laser ray can burn a hole in the satellite's fuel tank.

The leaking fuel then pushes the satellite into the atmosphere. The laser that performs this operation can be ground-based: a mirror above the orbital trajectory can aim the ground-based laser at the target and reflect the beam onto the targeted satellite.
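To give a sense of scale (a rough, idealized two-body calculation, not from the source), the velocity change needed to drop a satellite's perigee from a circular low orbit into the upper atmosphere can be estimated from the vis-viva equation.

```python
# Rough estimate of the velocity change needed to lower a satellite's perigee
# from a circular 500 km orbit down to 100 km, where drag finishes the deorbit.
# Idealized two-body calculation; altitudes are example values.
import math

MU = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3       # mean Earth radius, m

r_orbit = R_EARTH + 500e3     # current circular orbit radius
r_perigee = R_EARTH + 100e3   # target perigee inside the upper atmosphere

v_circular = math.sqrt(MU / r_orbit)
# Speed at apogee of the transfer ellipse whose perigee is r_perigee:
v_transfer = math.sqrt(MU * (2 / r_orbit - 2 / (r_orbit + r_perigee)))

delta_v = v_circular - v_transfer
print(f"delta-v ~ {delta_v:.0f} m/s")   # on the order of 100 m/s
```

This is why a small, sustained push from leaking fuel or laser ablation can be enough: only about a hundred meters per second of braking is required, not a destructive impact.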


https://www.freethink.com/space/space-junk-fusion


https://thehill.com/opinion/national-security/578797-the-return-of-fobs-china-moves-the-space-arms-race-into-the-nuclear


https://en.wikipedia.org/wiki/Fractional_Orbital_Bombardment_System

Tuesday, March 5, 2024

Photonic Microchips and AI-based operating systems make the most powerful systems in the world.

"DataCebo, an MIT spinoff, leverages generative AI to produce synthetic data, aiding organizations in software testing, patient care improvement, and flight rerouting. Its Synthetic Data Vault, used by thousands, demonstrates the growing significance of synthetic data in ensuring privacy and enhancing data-driven decisions. Credit: SciTechDaily.com" (ScitechDaily, Generative AI: Unlocking the Power of Synthetic Data To Improve Software Testing) 

In software testing, the AI can connect itself to simulators or record data from real environments or similar systems. The interactive AI can then interact with those systems and models. Similar setups can be used for physical tools.

"DataCebo offers a generative software system called the Synthetic Data Vault to help organizations create synthetic data to do things like test software applications and train machine learning models. Credit: Courtesy of DataCebo. Edited by MIT News."  (ScitechDaily, Generative AI: Unlocking the Power of Synthetic Data To Improve Software Testing) 


The new microchips can revolutionize quantum computer development. Photonic microchips can keep a computer's temperature low, and it is possible that they can also change their state between binary and quantum modes. Those new microchips require highly advanced operating systems that can control them accurately.

A microchip by itself does nothing; operating systems and software are just as important for successful computing. In AI-based systems, the center of the system is the language model, which can search for things on the Internet.

Or it can use limited datasets. The AI-based operating system can activate the applications it needs, and if an application is not installed or the system has no access to it, the AI can fetch that application from the application store. The system can then ask whether the user wants to use the keyboard or spoken words before it starts driving commands to the application. The application might include a database that tells the AI what kind of commands the application accepts, and the AI can then use those commands to control it.
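A toy sketch of that flow, with the application names, the command database, and the "store" all hypothetical: the operating system checks whether an application is installed, fetches its command set if not, and then routes the user's command through that set.

```python
# Toy sketch of the flow described above; application names, the command
# database, and the "store" are hypothetical.
installed_apps = {"mail": {"send", "read"}}
app_store = {"calendar": {"add_event", "list_events"}}

def get_app(name):
    if name not in installed_apps:
        # Application not installed: fetch its command set from the store.
        installed_apps[name] = app_store[name]
    return installed_apps[name]

def run_command(app_name, command):
    commands = get_app(app_name)
    if command in commands:
        return f"{app_name}: executing '{command}'"
    return f"{app_name}: unknown command '{command}'"

print(run_command("mail", "send"))
print(run_command("calendar", "add_event"))   # installed on demand
```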


"Researchers have developed a groundbreaking light-based processor that enhances the efficiency and scalability of quantum computing and communication. By minimizing light losses, the processor promises significant advancements in secure data transmission and sensing applications. Credit: SciTechDaily.com" (ScitechDaily, Quantum Computing Takes a Giant Leap With Light-Based Processors)


Generative AI can use synthetic datasets to improve research. 


Along with generative AI, morphing networks can collect data from multiple data sources, which means they can collect data from multiple laboratories. The networked AI can then use highly accurate computing to simulate materials and their behavior in certain environments. When we think about synthetic data, we can combine multiple sources and connect arbitrary numeric and chemical values in that data. This type of dataset can simulate thermal effects in microchips very accurately.
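A minimal sketch of generating a synthetic dataset of the kind mentioned above: randomly drawn but plausibly distributed chip-temperature readings that a thermal simulation or test suite could consume. The distribution parameters are invented.

```python
# Minimal sketch: generate a synthetic dataset of chip-temperature readings
# (distribution parameters are invented) that a thermal simulation could use.
import random

def synthetic_temperatures(n, mean_c=65.0, spread_c=8.0, seed=0):
    rng = random.Random(seed)
    return [rng.gauss(mean_c, spread_c) for _ in range(n)]

data = synthetic_temperatures(1000)
print(min(data), max(data), sum(data) / len(data))
```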

The new and powerful AI requires an AI-based operating system that can control the interaction between microchips, other hardware, and software very accurately. That allows the system to shut down some of the microprocessor's cores if the temperature rises too high, as sketched below.
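A simplified sketch of that thermal rule: when a core's temperature exceeds a limit, the operating system parks it, and it is brought back only after it has cooled down. The thresholds and data structures are assumptions.

```python
# Simplified sketch: park cores whose temperature exceeds a limit and bring
# them back when they have cooled down. Thresholds are assumptions.
SHUTDOWN_C = 90.0
RESUME_C = 75.0

def update_core_states(core_temps, active):
    """core_temps: {core_id: temp_c}; active: set of currently active cores."""
    for core, temp in core_temps.items():
        if core in active and temp >= SHUTDOWN_C:
            active.discard(core)          # too hot: shut this core down
        elif core not in active and temp <= RESUME_C:
            active.add(core)              # cooled down: bring it back
    return active

active = {0, 1, 2, 3}
print(update_core_states({0: 95.0, 1: 80.0, 2: 70.0, 3: 92.0}, active))
# -> cores 1 and 2 stay active; 0 and 3 are parked
```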

Image: FreeThink.com


The over-human software developers, AI-controlled development environments that are among the most effective systems in the world, can use this type of synthetic data in the R&D process.

In the case of physical systems like cars or ships, this kind of tool can collect data from different accidents and from the faults that cause most inspection rejections. The system can then collect data from other vehicles where those points do not cause rejections and connect those solutions to new models. In the same way, an AI-based software development tool can search for things like data security flaws and other vulnerabilities following parameters stored in the system.

The AI-controlled developer can connect with AI-based operating systems that can adjust the microchips' operations, which makes those systems even more effective. However, the AI-based software development system requires an operating system that mediates between software and hardware.

https://www.freethink.com/robots-ai/ai-software-engineer


https://scitechdaily.com/generative-ai-unlocking-the-power-of-synthetic-data-to-improve-software-testing/


https://scitechdaily.com/quantum-computing-takes-a-giant-leap-with-light-based-processors/

AI and new upgrades bring fusion power closer than ever.

"New research highlights how energetic particles can stabilize plasma in fusion reactors, a key step toward clean, limitless energy. Cr...