
Compositional generalization is a new approach in machine learning.


Compositional generalization means the ability to create new entities from existing parts. The idea is like making new music from the same notes, arranged in new ways. If a system knows the words "walking" and "two times", it can combine those terms into the phrase "walking two times" even if it has never seen that phrase as a whole. That is the core idea of compositional generalization.
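The idea above can be sketched in a few lines of code. This is a minimal, illustrative toy (the vocabulary and action names are invented, not from the article): the interpreter composes known action words with known modifier words, so it can handle a combined phrase it was never shown as a whole.

```python
# Toy sketch of compositional generalization: compose known primitives
# with known modifiers to interpret a phrase never seen as a whole.
# All names here are illustrative assumptions, not a real system.

PRIMITIVES = {"walk": ["WALK"], "jump": ["JUMP"]}
MODIFIERS = {"twice": 2, "thrice": 3}

def interpret(phrase: str) -> list[str]:
    """Expand an '<action> [modifier]' phrase into a sequence of steps."""
    tokens = phrase.split()
    steps = PRIMITIVES[tokens[0]]
    repeat = MODIFIERS[tokens[1]] if len(tokens) > 1 else 1
    return steps * repeat

# The system knows "walk" and "twice" separately, so it can
# generalize to the combined phrase "walk twice".
print(interpret("walk twice"))  # ['WALK', 'WALK']
```

The point is that the meanings live in the parts; the combination comes for free from the rule that joins them.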

Compositional generalization can also extend to physical work. A system can connect certain movements with certain objects and with the orders that users give to robots. The key element in such a robot is language modeling: the robot must "understand" what people want it to do when they say something. For a robot, compositional generalization means connecting the words it hears to a certain series of movements.
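One hedged way to picture "words connected to movement series" is a lexicon that maps verbs to motion templates and nouns to joints; a two-word command is resolved by composing the two entries. The servo names and motion labels below are invented for illustration.

```python
# Illustrative sketch: ground heard words in movement sequences.
# Verbs map to motion templates, nouns map to joints (names invented).

VERBS = {"raise": ["lift_to_90deg"],
         "wave": ["lift_to_90deg", "swing", "swing"]}
JOINTS = {"left-arm": "servo_L1", "right-arm": "servo_R1"}

def plan(command: str) -> list[tuple[str, str]]:
    """Turn a '<verb> <joint>' command into (servo, motion) steps."""
    verb, joint = command.split()
    return [(JOINTS[joint], motion) for motion in VERBS[verb]]

print(plan("wave right-arm"))
# [('servo_R1', 'lift_to_90deg'), ('servo_R1', 'swing'), ('servo_R1', 'swing')]
```

Because verb and joint are stored separately, the robot can execute "wave left-arm" even if it was only ever taught "wave" on the right arm.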


"Researchers have developed a technique called Meta-learning for Compositionality (MLC) that enhances the ability of artificial intelligence systems to make “compositional generalizations.” This ability, which allows humans to relate and combine concepts, has been a debated topic in the AI field for decades. Through a unique learning procedure, MLC showed performance comparable to, and at times surpassing, human capabilities in experiments. This breakthrough suggests that traditional neural networks can indeed be trained to mimic human-like systematic generalization." (ScitechDaily.com/The Future of Machine Learning: A New Breakthrough Technique)



In a physical model, the system can learn by observation. When a robot or learning machine learns to walk, it can watch how people walk; in this example, the robot's body resembles a human's. The robot must then map what it sees onto itself as the right things: it must know what limbs are, and which of its own parts it must use for a given movement, such as the left arm.

The servo motors that move the joints must be equipped with microchips. The system must know which side is the left side, and where each servo motor sits on the body. Operators can teach which side is left by raising the robot's left hand, and then making the robot connect that moving hand to the words "left side".
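The teach-by-demonstration step could be sketched as follows. This is a simplified assumption about how such binding might work (the class, servo addresses, and method names are all hypothetical): the operator moves a joint while saying a phrase, and the system binds the phrase to the address of the servo that just moved.

```python
# Hypothetical sketch of teaching by demonstration: bind a spoken
# phrase to the address of the servo that the operator just moved.

class Robot:
    def __init__(self):
        self.bindings = {}        # phrase -> servo address
        self.last_moved = None    # address of the most recently moved servo

    def move_servo(self, address: int):
        self.last_moved = address  # in a real robot this drives the joint

    def teach(self, phrase: str):
        """Operator says a phrase while a servo moves: bind them."""
        if self.last_moved is not None:
            self.bindings[phrase] = self.last_moved

    def lookup(self, phrase: str):
        """Later, the phrase alone addresses the right servo."""
        return self.bindings.get(phrase)

robot = Robot()
robot.move_servo(0x21)      # operator raises the left hand (servo 0x21)
robot.teach("left hand")    # ...while saying "left hand"
print(robot.lookup("left hand"))  # 33 (i.e. 0x21)
```

After the binding exists, a spoken command that contains "left hand" can be routed to the right joint without the operator touching the robot again.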

The microchips controlling the left side can be small computers that decide whether their servo motor needs to move. In that case, the robot's central "brain" computer can send commands over a common bus using a shared address. The small computers that control the servo motors then filter out messages that are not addressed to them.
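The broadcast-and-filter idea can be sketched like this. It is loosely modeled on how nodes on a shared bus (such as CAN) accept or ignore frames; the addresses and message layout below are invented for illustration. The brain broadcasts every command to all controllers, and each controller keeps only the messages that match its own address.

```python
# Sketch of broadcast-and-filter on a shared bus (addresses invented).
# The central brain sends every command to all joint controllers;
# each controller filters out traffic not addressed to it.

from dataclasses import dataclass

@dataclass
class Message:
    address: int   # which joint controller the command targets
    command: str   # e.g. "raise", "lower"

class JointController:
    def __init__(self, address: int):
        self.address = address
        self.received = []

    def on_bus_message(self, msg: Message):
        # Acceptance filter: ignore messages for other controllers.
        if msg.address == self.address:
            self.received.append(msg.command)

# The brain broadcasts once; only the matching controller acts.
controllers = [JointController(a) for a in (0x10, 0x11, 0x12)]
broadcast = Message(address=0x11, command="raise")
for c in controllers:
    c.on_bus_message(broadcast)

print([c.received for c in controllers])  # [[], ['raise'], []]
```

This is why the central computer can stay simple: it addresses joints by name on one shared line instead of maintaining a dedicated link to every motor.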

Because such robots are networked systems in which intelligent controllers manage the joints, the rest of the neural structure becomes simpler to build, and the computational load on the robot's central "brain" computer becomes lighter. In a networked system there are multiple secondary computers in the robot's body, and a group of robots can form a whole in which each member shares its information and computing capacity.


https://scitechdaily.com/the-future-of-machine-learning-a-new-breakthrough-technique/
