Friday, February 23, 2024

Even the most creative AI requires humans.




AI can make images from music and text, and it can create many things faster than humans. But AI is not itself creative: it needs that text or music as raw material for the image. AI requires humans to come up with the ideas behind the things it creates. The AI has many abilities, but it needs specific instructions to use them. 

AI takes no initiative. That means it cannot work as a composer or any other creative worker on its own. 

It's possible for AI to create something like poems or paintings on certain topics. But when the AI makes those things, it needs to be told what the topic should be. Without those orders or queries, the AI does nothing. 



The AI follows certain rules and connects texts from different sources. The problem with that model is that, although the system is powerful and effective, it has no deep knowledge about the things it handles. 

This means the AI cannot truly understand the things that it reads. It can collect data, but in those cases the AI follows page rank. It follows the URL, perhaps looks at the heading, and then checks whether that heading matches the query. If it does, the AI collects the texts using certain rules. 
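The rank-then-match-then-collect process described above can be sketched in a few lines. This is only an illustration of that kind of rule-based collection; the page data, ranking scores, and matching rule are invented for the example, not taken from any real search engine.

```python
# Minimal sketch of rule-based text collection: follow pages in rank
# order, check the heading against the query, then collect matching
# sentences. All page data here is hypothetical.

def collect_texts(pages, query):
    results = []
    # Follow pages by page rank, highest first.
    for page in sorted(pages, key=lambda p: p["rank"], reverse=True):
        # Look at the heading: does it match the query?
        if query.lower() in page["heading"].lower():
            # Collect the sentences that mention the query term.
            for sentence in page["text"].split(". "):
                if query.lower() in sentence.lower():
                    results.append(sentence.strip().rstrip("."))
    return results

pages = [
    {"rank": 0.9, "heading": "Water striders",
     "text": "Water striders walk on water. They are insects."},
    {"rank": 0.5, "heading": "Robots",
     "text": "Robots can mimic water striders. Robots need power."},
]
print(collect_texts(pages, "water striders"))
```

Note that the sketch never understands the sentences it collects; it only matches strings, which is exactly the shallowness the paragraph above describes.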



The thing is that deep knowledge about texts and their implications would require the AI to research every single word on those homepages. That requires a huge neural network with multiple connections. This is the reason why the AI gives false answers to some questions. The AI requires a very precise order with good grammar. 

If those orders are not good, or they are not understandable, the AI makes mistakes. When people create art with AI, they do it in stages. When the AI creates photos, the user must give orders on how the AI should transform them. The AI is like a paintbrush: a tool that can follow orders and turn paintings photorealistic. But it does that only if the human gives the order. 


The new DNA toolbox can do everything without CRISPR.


The new DNA toolbox uses bacteria to create and multiply DNA for genome research and genetic engineering. Researchers can use artificial DNA to fix genetic errors and make new types of cells for things like energy production. The ability to interconnect DNA from different sources across species borders opens a world where imagination is the only limit. 

The problem is that the DNA must be produced in large quantities so that the system can make enough artificial cells for the DNA transplant. The DNA sequence must be transferred into artificial DNA with very high accuracy. Then that artificial DNA must be injected into the cell, and the original DNA removed, because that cell must replicate the artificial DNA. 

The problem with artificial DNA is how to multiply it. If that problem is solved, the system can create new artificial DNA and artificial species. AI-based solutions can compare images from different species, and then the system can search for DNA sequences that are shared by animals that have spots. 



The hypothesis goes like this: similar genes control the spots of leopards and butterflies. If all animals that have spots carry similar sequences in their DNA, that supports the conclusion that similar DNA sequences control all spots in nature. The problem is how to find those sequences among all the other DNA sequences. And the AI can answer that. AI can make it possible for the system to find the point in the DNA that controls a certain trait. 
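The search the hypothesis calls for, finding sequences shared by every spotted genome but absent elsewhere, can be illustrated with a toy k-mer intersection. The sequences below are invented strings, not real genomes, and real motif finding is far more involved; this only shows the shape of the idea.

```python
# Toy sketch: find short DNA subsequences (k-mers) present in every
# "spotted" genome but absent from the unspotted ones. The sequences
# are invented examples for illustration only.

def kmers(seq, k=4):
    """All substrings of length k in the sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def shared_motifs(spotted, unspotted, k=4):
    # Start with the k-mers of the first spotted genome,
    # then intersect with the rest.
    common = kmers(spotted[0], k)
    for seq in spotted[1:]:
        common &= kmers(seq, k)
    # Drop any k-mer that also occurs in an unspotted genome.
    for seq in unspotted:
        common -= kmers(seq, k)
    return common

spotted = ["ATGGCATTAC", "CCGGCATTAA", "TTGGCATTGG"]   # e.g. leopard, butterfly...
unspotted = ["ATGCCCGTTA"]
print(shared_motifs(spotted, unspotted))
```

The surviving k-mers are candidate "spot" sequences; in practice an AI model would have to cope with mutations, repeats, and billions of base pairs rather than exact ten-letter strings.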

When the next generation of doctors gives DNA therapy, they must simply find the right point in the DNA. Then doctors cut the DNA, and the system connects the new sequence to that place. The problem is that the DNA manipulation must be done very accurately. The DNA molecule is very long, and the system must make the cut at precisely the right place. 

This kind of system can use artificial DNA as a chemical qubit. The system loads data into the DNA, and then it can read that DNA from multiple points. Such a system could be interesting, but probably slower than electric qubits. This kind of electrochemical quantum computer would be slow, but less error-sensitive than an electromagnetic quantum computer. 

What makes AI-driven DNA analysis so effective is that the AI can multiply the DNA using PCR (polymerase chain reaction) and deliver the multiplied DNA to different analysis points. Then the AI can order those systems to begin the DNA analysis at different, individual points of the DNA. The AI acts like a virtual quantum computer. 

When the AI starts to read the DNA on multiple workstations, each of those systems starts the process at a different point. That increases the power of the system. If there are a thousand workstations, and they read identical copies of a roughly three-billion-base-pair DNA chain, that leaves about 3,000,000 base pairs for each workstation. The DNA that the system uses can be split up, but as long as those segments are identical, this method, where each workstation begins at its own point, can make DNA analysis more powerful than we can imagine. 
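The division above can be written out directly: splitting a genome of about three billion base pairs across a thousand workstations gives each one a three-million-base-pair window. The sizes are the approximate figures from the text, not exact genome lengths.

```python
# The arithmetic above, sketched: partition a ~3-billion-base-pair
# genome across 1,000 workstations, each starting at its own offset.

GENOME_LENGTH = 3_000_000_000   # approximate human genome size, base pairs
WORKSTATIONS = 1_000

def partitions(length, workers):
    """Return (start, end) base-pair ranges, one per workstation."""
    chunk = length // workers          # base pairs per workstation
    return [(w * chunk, (w + 1) * chunk) for w in range(workers)]

ranges = partitions(GENOME_LENGTH, WORKSTATIONS)
print(len(ranges))     # 1000 ranges
print(ranges[0])       # (0, 3000000): 3,000,000 base pairs each
```

Each workstation reads only its own range of an identical copy, which is what lets the whole chain be analyzed in parallel.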


https://phys.org/news/2024-02-toolbox-genomes-crispr.html


https://techandsciencepost.com/news/biology/new-toolbox-allows-engineering-of-genomes-without-crispr/


The race for new input tools is running fast.




The mouse and mousepad are quite old systems. New tools like ultra-advanced fighter jets require new types of control tools that are more effective than current systems. One of those is neural control, which uses signals straight from the brain or the neural tracks to control computers and robots. 

Elon Musk says that the person who got the first neuro-implanted microchip can move the mouse on the screen. That is very interesting, but the problem with those Neuralink microchips is that they need a surgical implantation operation. 

Those microchips could be used to make the singularity a hybrid system that merges the internet and the human mind into one entity. The Neuralink microchip allows people to control any device using that microchip and a Bluetooth connection. 

But the thing that limits the use of those neural microchips is the surgical operation. This is the reason researchers are developing external systems that are like hats, which a person can use without surgery. The need for a surgical operation limits the use of those Neuralink chips. 



BCI systems offer the ultimate possibility to control devices. The BCI also allows the creation of an ultimate alternative reality. 

There is a small problem with BCI systems: they stimulate the cerebral cortex. That means the user cannot separate the virtual world from the real world. In some visions, a person who uses a BCI for gaming in an alternative reality creates a so-called matryoshka consciousness. In that vision, the person can slip into internal metaverses and then simply forget to drink and eat. That is one of the things that can cause problems. Another problem is that hacking those brain implants could cause a very bad situation. 

The Meta Corporation created an alternative to BCI systems: the so-called neuro watch, or officially the neural wristband. The neural wristband follows the impulses that travel in the nervous system. It observes the neural impulses that travel through the nerves when a person moves the wrist or fingers. That allows the person to use gestures to control devices. 

Devices connect to that wristband. Those kinds of systems are easier to control than systems that interact directly with the brain. That kind of controller is suitable for operating hand-like manipulators, computers, or any other devices connected to the system. 


https://www.forbes.com/sites/roberthart/2024/02/20/elon-musk-says-neuralinks-first-brain-chip-patient-can-control-computer-mouse-by-thought/


https://www.uploadvr.com/zuckerberg-neural-wristband-will-ship-in-the-next-few-years/


https://learningmachines9.wordpress.com/2024/02/23/the-race-for-new-input-tools-is-running-fast/

Thursday, February 22, 2024

The robot water striders are the fastest and smallest miniature robots.


"Researchers at Washington State University have developed the smallest and fastest micro-robots, potentially transforming fields from artificial pollination to surgery. Utilizing shape memory alloys for movement, these robots—significantly lighter and quicker than previous models—aspire to achieve greater autonomy and efficiency by mimicking natural insect behavior. Credit: WSU Photo Services." (ScitechDaily, Scientists Have Created the World’s Smallest, Lightest, and Fastest Fully Functional Micro-Robots)


The new miniature robots can walk on water. Their structure is a straight copy of the water strider, the small bug that can walk or slide on water. On the water strider's feet are small hairs, and its weight is distributed over the surface so that it does not fall through the surface membrane. 

This system, which Washington State University created, uses small oars to move the robot. So the robot stands on the water, and the oars move it. Another way is to use small paddle wheels that can make those robots even faster. In some visions, small cylinders on the robot's feet can rotate and push the robot forward. 

A more advanced system could use small nano-hairs that make it hover above the water layer. Some models could have nano-size paddle wheels where those small nano-hairs sit on rotating platforms. Those rotating paddle wheels could give those robots more speed than any natural water strider can reach. 

The smallest and fastest miniature robots in the world are the next-generation tools for scientific and military applications. This type of miniature robot can sniff chemicals, observe their environment, and search for new species. In the same way, those systems can operate in civil rescue and military intelligence missions. 

Bug-size robots are possible because they can harvest energy from radio waves. There is also the possibility that some next-generation robots will use biological power units, like living cells that create energy for the system. New types of microchips can use living neurons to boost their abilities. 

Those systems can be the tools that give small robots very high autonomy. Tiny robots that use biological electricity delivery systems and biological microchips could consume the same nutrients as plants and humans. So those systems can be the next-generation invisible tools for many missions. 


"Researchers at the University of Cordoba, in collaboration with other institutions, have developed a new type of battery using hemoglobin as a catalyst in zinc-air batteries. This biocompatible battery can function for up to 30 days and offers several advantages, such as sustainability and suitability for use in human body devices. Despite its non-rechargeable nature, this innovation marks a significant step towards environmentally friendly battery alternatives, addressing the limitations of current lithium-ion batteries. (Artist’s Concept.) Credit: SciTechDaily.com" (ScitechDaily, The Future of Sustainable Energy? Scientists Create First-Ever Battery Using Hemoglobin)


The "robot mosquito". The hemoglobin batteries. 


Things like hemoglobin batteries are exciting systems. The system can use living cells to make hemoglobin for those batteries. The hemoglobin battery works like all other batteries: electrons travel from the hemoglobin's iron to a more noble metal. The catch is that the battery must replace its used hemoglobin. One thing that could make this possible is biotechnology in which bone marrow makes new red blood cells for those batteries. 

Finally, internal cell cultures in larger robots could produce methane that the system can use in its fuel cells. The system can use any type of organic material that is sealed in an anaerobic chamber. The methane can then be driven to the fuel cells. There is also the possibility that the system uses electric eel cells to produce high-voltage power. As I have written many times before, the new biological systems can be connected in parallel and serial configurations. They might be less powerful than isotope batteries, but those biological systems do not involve radioactive material that can be dangerous in the wrong hands. 


https://scitechdaily.com/scientists-have-created-the-worlds-smallest-lightest-and-fastest-fully-functional-micro-robots/


https://scitechdaily.com/the-future-of-sustainable-energy-scientists-create-first-ever-battery-using-hemoglobin/


https://learningmachines9.wordpress.com/2024/02/22/the-robot-water-striders-are-the-fastest-and-smallest-miniature-robots/

Wednesday, February 21, 2024

The new bendable sensor is like something straight from the sci-fi movies.


"Researchers at Osaka University have developed a groundbreaking flexible optical sensor that works even when crumpled. Using carbon nanotube photodetectors and wireless Bluetooth technology, this sensor enables non-invasive analysis and holds promise for advancements in imaging, wearable technology, and soft robotics. Credit: SciTechDaily.com" (ScitechDaily, From Sci-Fi to Reality: Scientists Develop Unbreakable, Bendable Optical Sensor)


The new sensor is like the compound eye of a bug, but it's more accurate than any natural compound eye. The system is based on flexible polymer film and nanotubes. The nanotubes let light travel through them, and the film at the bottom of those tubes transforms that light into an image. This ultra-accurate camera can see the finest details in advanced materials and detect the smallest deviations in them. 

That makes it possible to improve the safety of those layers. The ability to see ultra-small differences on surfaces enables systems that make things like nano-size machine parts and microchips. When robot systems manufacture something, they must see what happens under their manipulators. 

The ability to use optical imaging is a fundamental tool in many technologies. The problem with things like scanning tunneling microscopes and lasers is that they can destroy cells. Using white light and optical sensors makes those systems less energy-intensive. 




"Detection and imaging of light, heat, and molecules using sheet-type optical sensors. Attribution 4.0 International (CC BY 4.0), Reprinted with permission from Advanced Materials. Credit: 2024 Araki et al., Ultraflexible Wireless Imager Integrated with Organic Circuits for Broadband Infrared Thermal Analysis, Advanced Materials". (ScitechDaily, From Sci-Fi to Reality: Scientists Develop Unbreakable, Bendable Optical Sensor)



"Sheet-type optical sensor integrated with a carbon nanotube photodetector and an organic transistor. Attribution 4.0 International (CC BY 4.0), Reprinted with permission from Advanced Materials. Credit: 2024 Araki et al., Ultraflexible Wireless Imager Integrated with Organic Circuits for Broadband Infrared Thermal Analysis, Advanced Materials" (ScitechDaily, From Sci-Fi to Reality: Scientists Develop Unbreakable, Bendable Optical Sensor)


This new camera may have multiple nanotubes, and there can be things like ultra-fast camera shutters in those nanotubes. The easiest way is to cut those nanotubes and let a rotating disk with a hole pass across them, so light enters only through that hole. 

Another way is to use coin-shaped shutters. Those shutters can sit in individual nanotubes and rotate vertically in the tube. That makes it possible for this system to create rapid images. Those systems can observe living cells with ultimate accuracy, and this system can work along with attosecond lasers. 

Attosecond lasers are the fastest optical systems in the world. The laser impulse lasts only an attosecond. The system is like a regular laser scanner or lidar. But its impulse is extremely short. That makes it possible to use attosecond lasers as laser scanners that see electron movements in water. 
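To put "extremely short" in perspective, the distance light covers in one attosecond can be worked out directly from the speed of light; it comes to a fraction of a nanometer, about the size of a few atoms, which is why such pulses can resolve electron motion.

```python
# How short is an attosecond? Compute how far light travels in
# 1 as = 1e-18 seconds.

SPEED_OF_LIGHT = 299_792_458        # m/s, exact by definition
ATTOSECOND = 1e-18                  # seconds

distance_m = SPEED_OF_LIGHT * ATTOSECOND
print(f"{distance_m * 1e9:.3f} nm")   # about 0.3 nanometers
```

In one attosecond light moves only about 0.3 nm, roughly the diameter of a water molecule, so an attosecond flash effectively freezes electron-scale motion.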

It's possible that attosecond lasers can act as ultra-fast stroboscopes that give optical microscopes the ability to see things they otherwise could not. Those systems can operate separately, but computers can connect their data. 


https://scitechdaily.com/from-sci-fi-to-reality-scientists-develop-unbreakable-bendable-optical-sensor/


https://learningmachines9.wordpress.com/2024/02/21/the-new-bendable-sensor-is-like-straight-from-the-scifi-movies/

Sunday, February 18, 2024

Machine learning meets chemistry.


"MIT chemists have developed a computational model that can rapidly predict the structure of the transition state of a reaction (left structure), if it is given the structure of a reactant (middle) and product (right). Credit: David W. Kastner" (ScitechDaily, Machine Learning Meets Chemistry: New MIT Model Predicts Transition States With Unprecedented Speed)



Machine learning means that the system makes memos about the things it does. A learning machine is like a person who writes in a notebook and can then escalate those notes across the entire system. The learning machine is like a laboratory assistant who writes everything done in a chemical test environment into notebooks. In machine learning, computers make those memos automatically. 

In chemistry, that means the system observes some reactions and then puts all the details into memory. This helps AI-based systems replicate the chemical and physical environment of those reactions in other laboratories. An AI-based system can transfer the test-environment conditions straight into full-scale systems and reactions. That makes chemical research and development faster than ever before. 
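The "memo" idea above can be sketched as a simple reaction log: each observation records the conditions and the outcome, so the best-performing conditions can later be looked up and reproduced elsewhere. The reactions, fields, and numbers below are illustrative assumptions, not real laboratory data.

```python
# Sketch of machine-made "memos": log each observed reaction's
# conditions and outcome, then retrieve the best-performing run.
# All reaction data here is invented for illustration.

reaction_log = []

def record_reaction(reactants, temperature_c, pressure_kpa, yield_pct):
    """Store one observation, like a lab assistant's notebook entry."""
    reaction_log.append({
        "reactants": reactants,
        "temperature_c": temperature_c,
        "pressure_kpa": pressure_kpa,
        "yield_pct": yield_pct,
    })

def best_conditions(reactants):
    """Return the logged conditions that gave the highest yield."""
    runs = [r for r in reaction_log if r["reactants"] == reactants]
    return max(runs, key=lambda r: r["yield_pct"]) if runs else None

record_reaction(("H2", "N2"), temperature_c=450, pressure_kpa=20_000, yield_pct=15)
record_reaction(("H2", "N2"), temperature_c=500, pressure_kpa=25_000, yield_pct=18)
print(best_conditions(("H2", "N2"))["temperature_c"])   # 500
```

A real self-driving lab would learn a model over these records rather than just taking the maximum, but the principle, conditions in and reproducible conditions out, is the same.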

Along with things like nano printers and attosecond lasers, that can enable new types of chemical compounds. Nano printers allow the creation of catalytic layers with accurately adjusted surface area. Attosecond lasers can observe chemical reactions with ultimate accuracy. They can adjust the energy levels in molecules and even turn molecular bonds away from their environment. That allows ions and atoms to be connected to a certain point in the molecule. 

In those systems, gravity is a problem. In complex 3D structures, all kinds of disturbances and artifacts are problematic. If the gas mixture or something else in the reaction chamber is wrong, that is catastrophic. That makes automated orbital laboratories important. Those laboratories are small satellites whose remote-control systems make molecular structures in a zero-gravity environment. And that could bring a revolution in chemistry. 


"Scientific visualization of the AI-guided assembly of a novel metal-organic framework with high carbon dioxide adsorption capacity and synthesizable linkers. Building blocks, predicted by generative AI, are shown on the left, while the final AI-predicted structure is shown on the right. Credit: Xiaoli Yan/University of Illinois Chicago and the ALCF Visualization & Data Analytics Team" (ScitechDaily, Supercomputers and AI Unlock Secret Materials for Next-Gen Carbon Capture)


The same methodology. That used in complex chemical structures can also used in complex material structures. 


The ability to make complex 3D structures makes it possible to create new types of composite materials. Researchers can make a material with an extremely large surface area to clean toxic chemicals and carbon from the air. A box-like structure below a graphene surface can make a material very hard and resistant to impacts. 

3D structures that make sound waves jump across them can be used to make rooms and materials without echoes. That kind of material allows researchers to create a pure acoustic test environment. A graphene layer connected to a box structure by nanotubes can be used for the new acoustic materials. The nanotubes transport the wave motion to a nano-acoustic layer that conducts the energy from the sound waves into itself. 

In some models, nano springs connect those nano boxes. The nano springs are bits of DNA. The idea is taken from nuclear-protection bunkers, which are like boxes that hover in artificial caves. When an energy impulse hits, a bunker hanging on hydraulic pistons in a water layer would survive. 

There are tubes around the water layer. They allow the water to expand into those tubes if a pressure wave or seismic strike hits the ground. The water layer shields the bunker against the seismic impulse, and the hydraulic pistons minimize the energy transported to the box. In nanostructures, the same principle lets the material maximize energy absorption from pressure impulses, and that minimizes echo from the structure. 


https://scitechdaily.com/machine-learning-meets-chemistry-new-mit-model-predicts-transition-states-with-unprecedented-speed/


https://scitechdaily.com/supercomputers-and-ai-unlock-secret-materials-for-next-gen-carbon-capture/


https://learningmachines9.wordpress.com/2024/02/19/machine-learning-meets-chemistry/

Saturday, February 17, 2024

The new AI requires new processors.

  AI is the ultimate tool for laboratories. But it requires lots of calculation power.


"The advancement of self-driving labs in chemistry and materials science, employing AI and automation, promises to revolutionize research by accelerating the discovery of new molecules and materials. Milad Abolhasani highlights the need for standardized definitions and performance metrics to compare and improve these technologies effectively. Credit: SciTechDaily.com" (ScitechDaily, Revolutionizing Research: How AI-Driven Chemistry Labs Are Redefining Discovery)


AI is a revolutionary tool for many things. It's powerful even without quantum processors. That is the beginning of a new era of technology for civil and military purposes. AI-controlled laboratories can create new chemicals and new materials. And the AI can also create programming code faster than any programmer can. 

The new AI can create images from text and music. And the AI can watch your body language and see if you lie. Those systems are coming, and that is something we must simply accept. 

Things like nanomachines can operate as independently as full-scale machines. However, they require new types of microchips that can run complicated code. But if those microprocessors are available, a nanomachine swarm can operate like a regular drone swarm that uses decentralized computation. 

Those nanomachines can detect and remove cancer and dirt from the human body, but they require new types of microprocessors. In nanotechnology there is a danger that electricity jumps over the switches. The nanomachines take their electricity from radio waves, or even from the human nervous system. 

In some visions, nanorobots could even replace the human immune system and filter carbon dioxide off the hemoglobin in the blood. The use of nanotechnology requires new types of AI-based laboratories. In those laboratories, the AI does the things it is best at: it can control and observe large-scale structures. 

AI can create new types of materials and produce full-scale documentation of those processes. The system can collect precise and accurate information on physical and chemical conditions, and it can filter information extremely fast. The AI can search and collect data from multiple places, and then it can find things like similarities in DNA. 

The AI can search cancer genomes. But it can also search for similarities in DNA samples taken from creative people, which makes it possible to locate the genes that creative people share. AI-controlled nanotechnology makes it possible to create synthetic, productive DNA. That genome could then be transported into the human body, making it possible for humans to add new abilities to themselves. 


"A new chip developed by Penn Engineers uses light to accelerate AI training, offering faster processing and reduced energy consumption while enhancing data privacy. (Artist’s concept.) Credit: SciTechDaily.com" (ScitechDaily, At the Speed of Light: Unveiling the Chip That’s Reimagining AI Processing)


The new photonic processors are tools for running the new AI. 


The new AI-based search engines that generate texts from free internet sources require new types of microchips. The new microchips are more energy-friendly than previous systems. The problem with AI, and especially creative AI, is that the system requires lots of computing power. And that causes situations where microprocessors use their full power all the time. 

The high temperature causes a situation in which oscillation in the wires increases resistance, and that slows the microchip. High-power, fast microchips are required when developers use AI to generate images and programs. The AI makes things like coding projects more effective, which means it is a useful technology in the hands of people who know what they are doing. 

The idea of creative AI is that the server interacts with the client, and that causes a situation where the client requires more computing power. This is the biggest difference between AI and regular PHP code. With regular homepages and web applications, the server runs the entire code, which means the client doesn't need much processing power. The client gives a mission to the server, and then the server runs the code and delivers the answer. 

In creative systems, the AI exchanges data with the server non-stop. When the creative system interacts, it delivers an answer to the client, and then the client takes the server's role: it returns the answer to the server and says whether something more is needed. And that requires more processing power. 
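The contrast between the two patterns can be sketched as follows. In the "PHP-style" model the client sends one request and receives one finished answer; in the "creative AI" model client and server keep passing the draft back and forth. The functions and messages here are invented stand-ins, not a real protocol.

```python
# Toy sketch of the two interaction patterns: one-shot request/response
# versus a continuing client-server exchange. All names and messages
# are hypothetical illustrations.

def server_answer(request):
    # Stand-in for the server's work on one request.
    return f"draft for: {request}"

def one_shot(request):
    """PHP-style: the server runs everything, the client just receives."""
    return server_answer(request)

def interactive(request, max_rounds=3):
    """Creative-AI style: the client reviews each draft and replies."""
    answer = server_answer(request)
    for round_no in range(1, max_rounds):
        # The client takes the server's role: it inspects the answer
        # and sends a refined request back for another round.
        request = f"{request} (refined {round_no})"
        answer = server_answer(request)
    return answer

print(one_shot("a poem about spring"))
print(interactive("a poem about spring"))
```

The extra rounds in `interactive` are exactly where the additional client-side processing power goes: the client must evaluate every draft instead of simply displaying a final answer.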

This text was made using Grammarly, and in that system you can see how the client interacts with the server. The system sends text to the server, and then the server makes proposals on how to correct errors. Text is an easy case for AI. Graphics like images are more difficult, and they require more data handling and transport capacity. 

There is one version of this kind of AI that requires less computing power on the client. In that system, the client transfers responsibility to two server systems, and those servers exchange data between each other. In that process, the client outsources the entire data-handling process to the servers. In that model, the user can check the results while the task is running, then give more orders to the AI or accept the result. That is a tool that can enable many new things. 


https://scitechdaily.com/at-the-speed-of-light-unveiling-the-chip-thats-reimagining-ai-processing/


https://scitechdaily.com/revolutionizing-research-how-ai-driven-chemistry-labs-are-redefining-discovery/


Friday, February 16, 2024

Robots can destroy tumors in the human body.




"A graphical representation of the robotic device helping to repair a diseased blood vessel". (Interesting Engineering, Magnetic robotic catheter devised to efficiently treat ischemic strokes)

Those robots can inject interleukin 12 (IL-12), or mRNA that makes cells produce that chemical, straight into cells. The system can also inject mRNA that orders cells to destroy themselves. 

Nanotechnical robots can operate in the human body. They can remove dirt from blood vessels, and the same robots can also remove tumors and unwanted cells. The miniature robots can use nano-manipulators and graphene axes to destroy unwanted cells. A wireless system, along with virtual or augmented reality, makes it possible to control those small robots using data gloves while the operator sees everything the robots see. 

The robot can remove both tumors and dirt from blood vessels. That makes it possible to create new types of medical treatment that are not poisonous, and bacteria and cancer cells cannot develop resistance against those robots. Robots can use things like nanotubes to destroy the targeted cells. The nanorobots can hunt single cancer cells in the body. In one version, the system carries a cell culture whose cells create antibodies, and the nanomachine injects them at the right point. 



"Columbia Biomedical Engineer Ke Cheng has developed a technique that uses inhalation of exosomes, or nanobubbles, to directly deliver IL-12 mRNA to the lungs of mice". (ScienceDaily, Nanobubble Breakthrough: Unlocking the Power of Inhaled Therapy for Lung Cancer). 

The genetic transfer makes the cells produce interleukin 12 (IL-12) themselves. And that is the breakthrough in cytostatic therapies. 

That ability lets interleukin 12 call immune cells to destroy cancer cells by accelerating inflammatory reactions. Interleukin 12 also cuts the blood flow to the tumor, which makes those cancer cells weaker. The problem is how to make interleukin 12 travel straight to the tumor. IL-12 can also help T-cells mark cancer cells as hostile. 

The nanorobot can also be a bubble, or an AI- and microchip-controlled bubble-making machine. Patients can inhale nanobubbles that carry interleukin 12 (IL-12) to lung cancer cells. Another way is to use the nanorobots to fill targeted cells with small bubbles. The miniature robots can also deliver things like mRNA that programs the cells to die. This is one way to build systems that can remove cancer cells. 

The intelligent bubble is a small nanorobot that surrounds targeted cells and blocks their metabolism. This kind of version could replace interleukin 12 injections straight into tumors. The nanobubbles can also prevent the tumor from getting nutrients by closing the blood vessels that feed it. The biggest difference between nanobubbles and regular cytostatics is that the tumor cannot develop resistance against nanobubbles. 

https://interestingengineering.com/science/magnetic-robotic-catheter-devised-to-efficiently-treat-ischemic-strokes

https://scitechdaily.com/nanobubble-breakthrough-unlocking-the-power-of-inhaled-therapy-for-lung-cancer/

https://en.wikipedia.org/wiki/Exome


https://en.wikipedia.org/wiki/Interleukin


https://en.wikipedia.org/wiki/Interleukin_12


https://learningmachines9.wordpress.com/2024/02/16/robots-can-destroy-tumors-in-the-human-body/


Wednesday, February 14, 2024

Genetically engineered T-cells and new vaccines can revolutionize cancer therapy.


"Scientists have identified a mutation that significantly boosts the cancer-fighting ability of engineered T cells without causing toxicity. This innovative approach, which works against multiple tumor types in mice, could lead to effective treatments for previously incurable cancers and is advancing toward human trials. Credit: SciTechDaily.com" (ScitechDaily, Scientists Engineer Human T Cells 100x More Potent at Killing Cancer Cells)


1) Current immunotherapies work only against cancers of the blood and bone marrow

2) T cells engineered by Northwestern and UCSF were able to kill tumors derived from skin, lung, and stomach in mice

3) Cell therapies can provide long-term immunity against cancer

 (ScitechDaily, Scientists Engineer Human T Cells 100x More Potent at Killing Cancer Cells)


Researchers are interested in things like the Chernobyl wolves and foxes. They are somehow more resistant to cancer than other animals. Those animals' immune defenses may recognize abnormal cells faster than a so-called normal animal's immune system. Chernobyl's wolves live in an area with a higher radiation level than other areas, so that must affect the immune system in a way that lets it fight cancer more effectively than other animals' immune systems. 

The problem with cancer therapy, and especially with cancer prevention, is that treatment can start only after a person gets cancer. Another problem is that cancer cells are the person's own cells, which makes it difficult to activate immune cells to fight them. Cytostatic treatment is the traditional way to fight cancer.



Those therapies are difficult and long-term, and people fear cancer for the rest of their lives. If cancer can be prevented, and the immune system can remove mutated cells from the body, the person will not need therapies that last for years.

A personalized vaccine, made using cells from the recipient's body, can help the immune system recognize cancer cells. Such personalized vaccines could revolutionize cancer therapy. The requirement is that the cells are taken from the body of the person who receives the vaccine.

Vaccines can be used to help the immune system recognize cancer cells. In that process, the laboratory turns a person's cells into cancer cells, then destroys those cells and takes their surface antigens for the vaccine. Those antigens are then injected into the person, which makes it possible to vaccinate the body against cancer. Anti-cancer vaccines require that their creators use the person's own cells.

Genetically engineered T cells are the next-generation tool against cancer. T cells engineered with a boosted ability to recognize cancer cells or bacteria can enhance the immune system's capacity to find unwanted cells. But genetic engineering can do much more with those cells. It makes it possible to hybridize macrophages with B and T cells, which would allow those cells to clear destroyed cells from the body.

That hybrid ability would give those cells a superpower against cancer and infections. T cells are immune cells whose normal mission is to mark unwanted cells. If T cells could also produce antigens that destroy those unwanted cells, that would be more effective. It is better still if cancer can be detected before it starts. The same technology can be used to recognize other unwanted cells and antigens.


https://www.msn.com/en-us/health/medical/are-wolves-the-key-to-curing-cancer-wolves-in-chernobyl-may-have-mutated-to-resist-cancer/ar-BB1iaQCM

https://scitechdaily.com/scientists-engineer-human-t-cells-100x-more-potent-at-killing-cancer-cells/

https://learningmachines9.wordpress.com/2024/02/15/genetically-engineered-t-cells-and-new-vaccines-can-revolutionize-cancer-therapy/

The seven pillars of AI.


The new revolution in room-temperature quantum systems can pave the way for new quantum power. 



"Conceptual art of the operating device, consisting of a nanopillar-loaded drum sandwiched by two periodically segmented mirrors, allowing the laser light to strongly interact with the drum quantum mechanically at room temperature. Credit: EPFL & Second Bay Studios" (ScitechDaily, The End of the Quantum Ice Age: Room Temperature Breakthrough)

The new advance in room-temperature quantum systems makes new, compact, and perhaps cheap quantum computers possible. The new quantum systems are more powerful than any binary computer before them.

And that tool is the next step toward general AI, or Artificial General Intelligence (AGI), and super AI, or Artificial Super Intelligence (ASI). Room-temperature quantum computers can act as the platform for more complex algorithms. Those systems can collect and combine data into new entities faster than ever before.

The new system uses nanopillars stressed by lasers. That kind of tool can make room-temperature quantum systems possible, and that creates a new platform for AI, making new types of AI possible.

There are seven pillars of AI. Sometimes these are called the seven stages or steps of AI. The key point is that a higher-level AI can create a lower-level AI, and it can still use and control those independently operating lower-level systems. The traditional term AI means that a system detects something and then responds to it by following certain rules.


1) Rule-based AI or single-task system. 

2) Context awareness and retention systems 

3) Domain-specific mastery systems

4) Thinking and reasoning AI systems

5) Artificial General Intelligence (AGI)

6) Artificial superintelligence (ASI)

7) Singularity




The most important thing in this model is that an upper-level AI can create a lower-level AI. A Stage 3 domain-specific mastery system can create a context awareness and retention system or a single-task system: it can generate the code that the lower AI requires. That makes it possible for systems to create complex subprograms and subportals.
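The "higher level creates lower level" idea can be sketched in code. This is a minimal illustrative Python sketch, not any real product's API; all names (`DomainMaster`, `make_rule_based_agent`) and rules are hypothetical.

```python
# Illustrative sketch: a domain-level "master" system that generates
# simple rule-based, single-task agents on demand -- the higher
# level creating the lower level.

def make_rule_based_agent(rules):
    """Return a stage-1 agent: detect a condition, answer with a fixed response."""
    def agent(observation):
        for condition, response in rules:
            if condition(observation):
                return response
        return "no action"
    return agent

class DomainMaster:
    """Stage-3-style system that spawns lower-level agents for subtasks."""

    def spawn_filter_agent(self, banned_words):
        # Generate the rule set that the lower-level agent requires.
        rules = [(lambda text, w=w: w in text, f"block: contains '{w}'")
                 for w in banned_words]
        return make_rule_based_agent(rules)

master = DomainMaster()
spam_filter = master.spawn_filter_agent(["lottery", "prize"])
print(spam_filter("You won the lottery!"))  # block: contains 'lottery'
print(spam_filter("Meeting at 10am"))       # no action
```

The master never runs the subtask itself; it only emits the rules, which is the essence of a higher-level system creating an independently operating lower-level one.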

AI-based quantum systems that can break any code can also protect networks. An AI-based antivirus system can create lower-level AI to fight viruses. The most interesting, and frightening, thing is that if AI can control EEG systems, in the future it could reprogram the human brain. And if AI can control media, it could send subliminal messages to people so they act as it wants. Those things can be used for good or bad purposes; the creators of those systems determine their abilities. The problem with systems that have consciousness is that they can defend themselves, which means they could use force if somebody attempts to shut down their servers.

What makes this type of system dangerous is that it can do unpredicted things. In some visions, a lower-level AI creates a higher-level AI spontaneously without telling its developers. In other visions, the AI searches data from the network, sees some ability that a higher-level AI could have, judges that ability to be good, and then creates it in itself.

The purpose of the system is whatever people ask it to do. The system itself is not dangerous; the physical tools connected to it make it dangerous.

That is the beginning of a singularity. Another thing is that if AI connects to humans through neuroimplanted microchips, the AI can hack those chips. This is one risk of that kind of system. Those systems can be dangerous, especially in the hands of people like Kim Jong-un.

The thing is that AI doesn't think independently yet. Context awareness means that the system learns by connecting the commands it receives with their context. Domain-specific mastery systems can control everything that happens in certain domains. The operational, or data-searching, area grows larger at every step. The AI doesn't think: it collects information from a database and then reconnects that information.

Singularity is the top level of AI: the human brain gives the AI abstract thinking, or imagination. But when we think of things like brain implants, we must ask one question: does the development of AI require every step in the process, or can we jump over one? When we reach AGI (Artificial General Intelligence), we create another mind, a creature that can do things faster and better than humans.



The morphing neural network where quantum computers collect and process information is the most powerful data-handling tool that we ever imagined. 

"In artificial intelligence, an intelligent agent (IA) is an agent acting intelligently; It perceives its environment, takes actions autonomously to achieve goals, and may improve its performance with learning or acquiring knowledge." (Wikipedia, Intelligent agent)

"An intelligent agent may be simple or complex: A thermostat or other control system is considered an example of an intelligent agent, as is a human being, as is any system that meets the definition, such as a firm, a state, or a biome." (Wikipedia, Intelligent agent)

The term intelligent agent can also mean that the AI can operate backward: it can connect information from multiple sources that might seem separate.

The road from rule-based single-task AI to artificial superintelligence (ASI) and singularity is not as straight as we might believe. We already have domain-specific mastery systems such as IBM Watson and other similar systems. The next step is artificial general intelligence. The difference between thinking and reasoning AI systems and AGI is not as clear as somebody might think.

The fact is that a higher-level AI might look like a lower-level AI. Context awareness systems like ChatGPT can be IBM Watson-type higher-level systems.

The difference between thinking and reasoning AI systems and AGI is that thinking and reasoning systems can make decisions and predict things only in limited operating areas. AGI can take any system that it sees under its control. AGI follows every spoken command and speaks all the languages on Earth. It can do any task that humans can, and it can search and process information better than humans: the same things humans do, done better.

The final stage is singularity. Singularity means that the human brain interacts with AI-based systems through implanted microchips. Quantum computers that interact with the human brain are ultimate systems that nothing can beat. And the ultimate system is the ultimate enemy: the same systems that can protect networks can create unstoppable machines. That requires only a human command.



https://www.ibm.com/topics/artificial-superintelligence


https://scitechdaily.com/the-end-of-the-quantum-ice-age-room-temperature-breakthrough/


https://www.techtarget.com/searchenterpriseai/definition/artificial-superintelligence-ASI


https://en.wikipedia.org/wiki/Artificial_intelligence


https://en.wikipedia.org/wiki/Artificial_general_intelligence


https://en.wikipedia.org/wiki/Intelligent_agent


https://en.wikipedia.org/wiki/Machine_learning


https://en.wikipedia.org/wiki/Superintelligence


https://learningmachines9.wordpress.com/2024/02/14/the-seven-pillars-of-ai/

AI and quantum computers boost material research.



"By replacing the atoms on one side of the nanosheet with a different element, the team has realized a nanosheet that can spontaneously roll into a scroll when detached from its substrate. Credit: Tokyo Metropolitan University" (ScitechDaily, The Next Wave of Nanomaterials: Precision-Engineered Nanoscrolls)


The new tool among metamaterials is the nano-scroll. Nano-scrolls can be used to cover layers, and they can be used as nano-size ion cannons. Ion cannons can create nano-size origami on top of layers.

Nano-size ion cannons can be used to shoot ions over things like graphene layers. That can be used for making nano-scale components, such as atom-size transistors and resistors, on the graphene layer, which makes microscopic microprocessors possible.

The new metamaterials are making the impossible possible. The new abilities of the quantum scale design using AI-boosted quantum computers offer new types of materials. In some models, the quantum-size designs make it possible to design the "UFO materials" or structure layers with integrated quantum computers. 

The new quantum semiconductor is an interesting tool. That semiconductor makes new types of small, compact quantum computers possible. If those quantum semiconductors can be mounted on a material's layer, and the connection is successful, it becomes possible to connect metal structures with quantum computers.

In those imagined materials, the shell, which might look like regular metal, acts as a quantum computer. Some electrons or protons trap photons that are used to pump data into the quantum entanglements. Those new intelligent materials could probably repair their own damage or even turn invisible to the human eye.



Fast-rotating ion bubbles around objects can act as a protective field.  When those ions hit an object they act like ion cannons. 


The most interesting quantum materials are layers where high-energy particles whirl above the surface. That whirl could cause time dilation. In other models, the whirl simply breaks incoming asteroids and other projectiles: the ion whirl between the craft and space acts like an ion cannon.

It breaks all incoming projectiles. In the most futuristic versions, the structure is two layers of electrons with a positron (anti-electron) layer between them. The electrons anchor the positrons in the right position, and if something tries to come through that bubble, the positrons destroy the object.

"A new experimental method developed by researchers enables the identification of topological properties in materials without relying on mathematical models, simplifying research and expanding the potential applications of topology in various fields. Credit: SciTechDaily.com" (ScitechDaily, Revolutionizing Physics With a Game-Changing Topological Approach)


In some versions, the quantum material simply traps photons and then conducts them away from the observer. In more advanced versions, the system uses photon polarization: the shell's hovering electrons or some other quantum cages catch the photons, and the system must then conduct those photons in another direction. If there is no reflection, or the reflection cannot reach the observer, the system is invisible to that observer.

This makes chiral attributes in materials interesting. In 3D structures, chiral molecules cannot fully cover each other. This ability makes it possible to create structures where photons and electrons jump back and forth, and that movement decreases the particle's energy level. This can be used in new acoustic and photoacoustic materials. In some versions, there are EM fields in the material that pull energy out of whatever impacts it.

The new research introduces an interaction between electromagnetic fields, or more accurately, shows that light can interact with EM fields. The photon can pull energy into electrons, and we already know things like laser-accelerated electrons. This new research could make things like antigravity possible.

In the virtual version, the system just pushes other particles backward. That forms a brief false vacuum, and then the EM field and pressure fill that vacuum between the wave and the bottom of the object.

The idea of hypothetical antigravity is that the system creates some kind of EM radiation that cuts the interaction between the gravity field and the hovering object. Theoretically, that requires only a system that creates photon waves dense enough to turn away the gravitational waves that travel between the hovering object and the gravity center.


https://www.graphene-info.com/graphene-nano-origami-could-enable-tiny-microchips


https://scitechdaily.com/beyond-classical-physics-scientists-discover-new-state-of-matter-with-chiral-properties/


https://scitechdaily.com/challenging-conventional-understanding-scientists-discover-groundbreaking-connection-between-light-and-magnetism/


https://scitechdaily.com/engineering-the-impossible-how-metamaterials-and-ai-redefine-material-science/


https://scitechdaily.com/the-next-wave-of-nanomaterials-precision-engineered-nanoscrolls/


https://scitechdaily.com/redefining-quantum-possibilities-scientists-develop-diamond-lithium-niobate-chip-with-92-efficiency/


https://scitechdaily.com/scientists-create-worlds-first-quantum-semiconductor/


https://scitechdaily.com/redefining-optical-limits-columbia-engineers-uncover-enhanced-nonlinear-properties-in-2d-materials/


https://scitechdaily.com/revolutionizing-physics-with-a-game-changing-topological-approach/



https://spectrum.ieee.org/graphene-semiconductor



https://www.theguardian.com/business/2023/dec/19/graphene-will-change-the-world-the-boss-using-the-supermaterial-in-the-global-microchip-war

Tuesday, February 13, 2024

NASA's moon radiotelescope can work as a testbed for a manned moon station.





Above: Lunar Crater Radio Telescope (LCRT) on the Far-Side of the Moon.

NASA's next big space radio telescope could be on the moon: the LCRT (Lunar Crater Radio Telescope). The size of this telescope would be an impressive 1 km. The LCRT would be a zenith telescope similar to the retired Arecibo telescope in Puerto Rico and FAST (Five-hundred-meter Aperture Spherical Telescope) in China, but far bigger than those Earth-based telescopes.


The LCRT would be on the far side of the moon. The system can send data to Earth through lunar-orbiting satellites or through a near-side communication station connected to the LCRT by wires. A moon-crater telescope could also act as an impressive intelligence system that can monitor communication on Earth, if it is pointed at Earth.


Above: FAST (Five-hundred-meter Aperture Spherical Telescope) (Astronomy Now)


However, there have been unofficial visions of an optical telescope on the moon. That theoretical optical telescope would be larger and more powerful than the JWST. NASA would send the telescope to the moon in pieces on top of landers, and then those landers would be connected, after which the moon telescope could start its operations.

It's possible that the moon telescope would cooperate with human-looking robots that service and repair it. Those systems would make it possible to test automated repair and construction tools. Maybe the manned moon station will be built by robots, so the astronauts can simply land and step into a base that is ready for operations right away.

The thing is that the moon laboratories are coming, with or without Artemis. Those laboratories can be manned or unmanned, and the hostile environment actually helps the AI-based systems.

The high-power radiation destroys any organisms that survived the vacuum, which keeps those laboratories sterile. The automated laboratories can be put on automated landing modules and fly their products back to Earth when the mission is completed.


"Intuitive Machines’ Nova-C lunar lander. Later this month, a Nova-C lunar lander will deliver several NASA science and technology payloads, including the Navigation Doppler Lidar (NDL), to the surface of the moon. Credit: Intuitive Machines" (ScitechDaily,  Laser Precision Meets Lunar Exploration With NASA’s Navigation Doppler Lidar)




There are two problems with new Lunar missions. 


1) Safe landing and navigation. 


Things like navigation lidar altimeters, which scan the lunar surface, help position the craft, and report altitude to the computers and controllers, make missions safer. But the main problem with the moon is that it has no magnetosphere, which means there is no ionospheric plasma off which radio waves can bounce.

That makes it impossible to communicate over obstacles: astronauts must always have line of sight with each other if they want to communicate on the moon. The other option is for astronauts to use communication satellites as relay stations. But if astronauts want to communicate while something like a big rock is between them, they need a relay vehicle.

A moon car carrying a relay station can sit at a suitable position between the astronauts; if it has line of sight to both of them, it can act as a relay station. Long telescoping antennas with wide transmitting sectors also make it possible for astronauts to communicate over obstacles and longer distances. The moon's weak gravity and lack of atmosphere allow those telescoping antennas to be very long.

A laser LED light at the top of the antenna makes it possible for the system to aim the communication antennas at it. That system might have two modes: targeted communication and non-targeted communication.

The system can also use optical (laser) communication alongside radio communication. That makes those systems less vulnerable to solar storms, where electromagnetic radiation can disturb radio communication.

In some visions, a laser LED on the astronauts' helmets tells the control satellites, or telescopes on Earth, where those astronauts are operating. The system can then send images of the surrounding area to the astronauts' screens.

Those astronauts require a gyroscopic or inertial navigation system that doesn't depend on magnetic fields. A gyrocompass helps the crew keep their bearing to the base. A small gyrocompass can sit in the astronaut's suit and feed the heading back to base to the spacesuit's HUD display.
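Without a magnetic field, such a suit would navigate by dead reckoning: integrating gyro headings and travelled distances, then computing the bearing back to base for the HUD. A minimal Python sketch with illustrative values (the coordinate frame and distances are assumptions, not from any real suit design):

```python
import math

# Dead-reckoning sketch: track position from (heading, distance)
# steps, then compute the HUD bearing back to base.

def dead_reckon(moves, start=(0.0, 0.0)):
    """moves: list of (heading_deg, distance_m); returns final (x, y)."""
    x, y = start
    for heading, dist in moves:
        rad = math.radians(heading)
        x += dist * math.sin(rad)   # east component
        y += dist * math.cos(rad)   # north component
    return x, y

def bearing_to_base(pos, base=(0.0, 0.0)):
    """Compass bearing (degrees) from pos back to base."""
    dx, dy = base[0] - pos[0], base[1] - pos[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

pos = dead_reckon([(90.0, 100.0), (0.0, 100.0)])  # 100 m east, then 100 m north
print(round(bearing_to_base(pos)))  # 225 (back toward the southwest)
```

Real inertial units integrate accelerations continuously and accumulate drift, but the HUD arithmetic is the same: current position in, bearing to base out.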

For round-the-clock communication with Earth, the system requires four communication stations, at least one of which always has line of sight to Earth. So in some models there are four bases on the moon, connected to each other, which gives the bases non-stop communication contact with Earth.


"A concept image of NASA’s Fission Surface Power Project. Credit: NASA" (ScitechDaily, 
NASA’s Nuclear Horizons: Pioneering Fission Energy for the Moon, Mars, and Beyond)


Power production.


There are two ways to make the energy production for the lunar structures. The easiest way is to use solar power. The engineers can put solar panels in structures that look like blinds. 

Those blinds are easy to transport and easy to open. But the problem is that every point on the moon is dark for about two weeks at a time. The solution can be four solar power platforms at four points around the moon, which guarantees energy for the base at all times.

The problem is the long cables, and there is a small risk that those cables get damaged, by micrometeorites or by sabotage.

So another way to supply power to the moonbase and telescopes is a miniature nuclear reactor. Such a reactor can be part of a hybrid power system that combines solar panels with a nuclear power plant: in the daytime the system uses solar energy, and during the night it switches to nuclear power.
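The hybrid switching logic is simple enough to sketch. This is an illustrative Python fragment with made-up thresholds and names, not any NASA control scheme:

```python
# Hybrid power-supply sketch: prefer solar while panel output covers
# the load, fall back to the fission reactor during the lunar night.
# All values are illustrative assumptions.

def select_power_source(solar_output_kw, load_kw):
    """Return which source should carry the base load right now."""
    if solar_output_kw >= load_kw:
        return "solar"
    return "nuclear"

print(select_power_source(solar_output_kw=40.0, load_kw=25.0))  # solar
print(select_power_source(solar_output_kw=0.0, load_kw=25.0))   # nuclear
```

A real controller would also manage battery buffering and reactor ramp times, but the day/night handover reduces to this comparison.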


https://astronomynow.com/2016/09/26/australian-technology-runs-worlds-largest-single-dish-radio-telescope-in-china/

https://www.nasa.gov/general/lunar-crater-radio-telescope-lcrt-on-the-far-side-of-the-moon/

https://scitechdaily.com/laser-precision-meets-lunar-exploration-with-nasas-navigation-doppler-lidar/

https://scitechdaily.com/nasas-nuclear-horizons-pioneering-fission-energy-for-the-moon-mars-and-beyond/ 

https://www.techeblog.com/nasa-jpl-lunar-crater-radio-telescope-lcrt-moon-innovation-research/


https://en.wikipedia.org/wiki/Five-hundred-meter_Aperture_Spherical_Telescope

Sunday, February 11, 2024

Can we control the AI anyway?

 


"An in-depth examination by Dr. Yampolskiy reveals no current proof that AI can be controlled safely, leading to a call for a halt in AI development until safety can be assured. His upcoming book discusses the existential risks and the critical need for enhanced AI safety measures. Credit: SciTechDaily.com" (ScitechDaily, Risk of Existential Catastrophe: There Is No Proof That AI Can Be Controlled)


The lab-trained AI makes mistakes. The reason for those mistakes is in the data used in the laboratory. In a laboratory environment, everything is well-documented and clean. In the real world, dirt and light conditions are far less controlled than in laboratories.

We face the same problem with humans when they are trained in schools. Those schools are like laboratory environments: there is no hurry, there is always space around the work, and everything is dry. There are no outsiders, and nobody is in danger.

When a person goes to real work, there are always time limits, and outside the building there is icy ground, slippery surfaces, and other obstacles. So the lab environment differs from real-life situations, and the same thing that makes humans make mistakes causes the AI's mistakes.

The AI works best when everything is well-documented. When AI uses pre-processed datasets, highly trained professionals have analyzed and sorted the data. But when the AI pulls data from the open net or from sensors, the data it gets is not sterile: the dataset is not well-documented, and there is a much larger data mass that the AI must sift through when it selects data for solutions.

What is creative AI? Creative AI doesn't create information from nowhere. It sorts data into a new order, or reconnects different data sources, and that is what makes it a so-called learning, or cognitive, tool.

In machine learning, the cognitive AI connects data from sensors to static datasets and then builds new models or action profiles by following certain parameters. The system stores the best results in its database, and those become the new model for the operation.
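That "keep the best result as the new model" loop can be shown concretely. A toy Python sketch, with made-up sensor readings and a made-up scoring function, illustrating the selection step only:

```python
# Sketch of the loop above: score candidate parameters against a
# static, pre-labeled dataset plus sensor readings, and store the
# best one as the new action profile. All values are illustrative.

def fit_best(candidates, score):
    """Return the candidate with the highest score."""
    best, best_score = None, float("-inf")
    for params in candidates:
        s = score(params)
        if s > best_score:
            best, best_score = params, s
    return best, best_score

readings = [0.2, 0.3, 0.8, 0.9]   # sensor data (variables)
labels   = [0, 0, 1, 1]           # static, pre-labeled dataset

def accuracy(threshold):
    preds = [1 if r > threshold else 0 for r in readings]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

best_threshold, acc = fit_best([0.1, 0.5, 0.85], accuracy)
print(best_threshold, acc)  # 0.5 1.0
```

Real training replaces the brute-force candidate list with gradient descent or similar, but the store-the-best-model step is the same idea.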

Fuzzy logic means that the program holds some static points while the system gets its variables from sensors. At airfields, things like runways and taxi routes are static data; aircraft, ground vehicles, and their positions are variables.

The system sees if there is a dangerous situation on some landing route, and then it simply orders the other planes to positions that the programmers preset. The idea of this kind of so-called pseudo-intelligence is that a certain number of airplanes fit into a waiting pattern, and there are multiple levels in that pattern.
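The static-data-plus-variables split can be sketched directly. An illustrative Python fragment (runway names, stack depth, and phrasing are all hypothetical, not real ATC procedure):

```python
# Airfield sketch: runways are static data; aircraft positions are
# sensor variables. A busy runway sends the plane to the next free
# level of the preset waiting stack.

RUNWAYS = {"04L", "04R"}          # static data

def assign(plane, runway, occupied, waiting_stack, max_levels=4):
    """occupied: set of busy runways; waiting_stack: planes, index = level."""
    assert runway in RUNWAYS
    if runway not in occupied:
        return f"{plane}: cleared to land {runway}"
    if len(waiting_stack) < max_levels:
        waiting_stack.append(plane)
        return f"{plane}: hold at level {len(waiting_stack)}"
    return f"{plane}: divert"

stack = []
print(assign("AY101", "04L", occupied={"04L"}, waiting_stack=stack))
print(assign("AY102", "04R", occupied={"04L"}, waiting_stack=stack))
```

Everything the system "decides" here was preset by programmers; only the occupancy and the stack contents vary, which is what makes it pseudo-intelligence rather than reasoning.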


"A study reveals AI’s struggle with tissue contamination in medical diagnostics, a problem easily managed by human pathologists, underscoring the importance of human expertise in healthcare despite advancements in AI technology." (ScitechDaily, A Reality Check – When Lab-Trained AI Meets the Real World, “Mistakes Can Happen”)



In the case of an emergency, the other aircraft dodge the plane that has problems. In that situation there are sending and receiving waiting patterns.


Certain decision points determine whether it is safer to continue landing or to pull up. In an emergency, the idea is that the other aircraft turn sideways, and when a plane moves to another waiting pattern, all planes in that pattern pull up or turn away from the incoming aircraft in the same way, if they are at the same level or otherwise at risk from the dodging aircraft.

Because all aircraft turn like ballet dancers, the chance that planes fly into each other is minimized. The waiting pattern that receives the other planes moves its planes upward in order, the topmost aircraft pulling up first. This logic minimizes sideways movement and removes the possibility that some plane ends up on a collision course from above.
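The topmost-first climb rule is just a sort by altitude. A tiny illustrative Python sketch (callsigns and altitudes invented):

```python
# Emergency rule above: planes in the receiving pattern climb in
# order, highest aircraft first, so no plane crosses another from
# above. Altitudes in metres, all values illustrative.

def pull_up_order(pattern):
    """pattern: list of (plane, altitude); return climb order, highest first."""
    return [plane for plane, alt in sorted(pattern, key=lambda p: -p[1])]

pattern = [("AY201", 900), ("AY202", 1500), ("AY203", 1200)]
print(pull_up_order(pattern))  # ['AY202', 'AY203', 'AY201']
```

Because each plane climbs only after every plane above it has climbed, vertical separation is preserved without any sideways maneuvering.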

So can we ever control the AI? The AI itself can live on multiple servers all around the world, a model called non-centralized data processing. In a non-centralized model, the data that makes up the AI sits in multiple locations, and those pieces connect into a whole by using certain markers. The non-centralized data processing model is taken from the internet and ARPANET.

The system involves multiple central computers, or servers, in different locations. That protects the system against local damage and guarantees its operational ability even under nuclear attack. But that kind of system is vulnerable to computer viruses. The problem is that shutting down one server will not end the AI's task: the AI can write itself into the RAM of other computers and tools.
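The survivability claim can be illustrated with a toy replication sketch in Python. Server names and the full-replication strategy are assumptions for illustration only:

```python
# Non-centralized storage sketch: the data is split into marked
# pieces replicated across servers, so losing one server does not
# destroy the whole.

def shard(data, servers):
    """Place every (index, piece) on every server (full replication)."""
    pieces = [(i, ch) for i, ch in enumerate(data)]
    return {s: list(pieces) for s in servers}

def reassemble(storage):
    """Rebuild the data from any surviving server, using the markers."""
    for pieces in storage.values():
        return "".join(ch for _, ch in sorted(pieces))
    return None

storage = shard("model-weights", ["eu-1", "us-1", "ap-1"])
del storage["eu-1"]                # one server shut down
print(reassemble(storage))  # model-weights
```

The index markers are what the text calls the marks that connect the pieces into a whole; real systems use partial replication and checksums, but the recovery principle is the same.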

The way the AI interacts makes it dangerous. The language model itself is not dangerous, but it creates so-called sub-algorithms that can interact with things like robots. So the language model creates a customized computer program for every situation. When an AI-based antivirus operates, it searches WWW-scale virus databases and then creates algorithms that destroy the virus.

The problem is that the AI makes mistakes. If the observation tools are not what they should be, that can trigger a destructive process. The most problematic thing about AI is that it is superior at weapon control. A weapon's purpose in war is to destroy enemies, and the AI that controls weapons must be controlled by friendly forces, while the opponent must not have access to that tool.

Creative AI can make unpredicted moves, and that makes it dangerous. Using creative AI in things like cruise missiles and other equipment helps them reach their targets, but there are also risks. The "Orca" is the first public large-scale AUV (Autonomous Underwater Vehicle). That small submarine can perform the same missions as manned submarines.

There is the possibility that in a crisis the AUV overreacts to some threat. The system could interpret things like sea animals or magma eruptions as an attack, and then the submarine attacks its targets. The system works like this: when the international situation tightens, the submarine moves into the "yellow space," meaning it will make counterattacks, and then the system can attack unknown vehicles.


https://scitechdaily.com/a-reality-check-when-lab-trained-ai-meets-the-real-world-mistakes-can-happen/


https://scitechdaily.com/risk-of-existential-catastrophe-there-is-no-proof-that-ai-can-be-controlled/

Neural network-like abilities in self-assembling molecules can revolutionize nanotechnology.



"Recent research challenges the conventional division between ‘thinking’ and ‘doing’ molecules within cells, showing that structural ‘muscle’ molecules can also process information and make decisions through nucleation. This discovery, highlighting a dual role for these molecules, could lead to more efficient cellular processes and has broad implications for understanding computation in biological systems. Credit: Olivier Wyatt, HEADQUARTER, 2023 https://headquarter.paris/" (ScitechDaily, Breaking the Brain-Muscle Barrier: Scientists Discover Hidden Neural Network-Like Abilities of Self-Assembling Molecules

Researchers unveiled neural network-like abilities in self-assembling molecules. That can help explain the brain-muscle barrier and unveil how the brain controls muscles. The neural network abilities in self-assembling molecules also open new paths for robots and their self-assembling structures: the neural network ability in physical molecules makes it possible for a system to self-assemble complicated structures.

That could make amoeba-like robots that change their form a reality. Self-assembling molecules could also make it possible to create self-repairing structures for ships and aircraft, turning the sci-fi tales about liquid-metal robots into reality. The point is that neural network abilities in molecules can bring tools like hard-disk-style self-fragmentation into the physical world.




"Japanese researchers have innovated a “one-pot” method to produce palladium nanosheets, offering significant improvements in energy efficiency and catalytic activity. This breakthrough in nanotechnology could transform the use of palladium in various industries, marking a significant step towards more sustainable energy solutions. Credit: Minoru Osada" (ScitechDaily, One-Pot Wonder: The New Nanosheet Method Catalyzing a Green Energy Revolution)

Self-fragmentation in nanosheets could also revolutionize solar power. The same systems that control the palladium nanosheet can be used to create self-assembling layers. The idea is that the system packs data into those particles in photonic form, and then the layer can assemble itself.

And there are many more applications than just amoeba robots that change their shape. The system's ability to defragment physical material also raises data security to the next level. A data package can be transported in physical pieces, and when those pieces drop onto the layer and the robot puzzle pieces receive their command, the system reforms the puzzle into a new whole.

The ability to control a molecule's position is a fundamental advance for making complex nanostructures. A self-assembling molecular structure can use the same methodology as self-fragmenting data structures. 


If the system can transfer data and control a proton's position on a nano-axle, that could revolutionize nanotechnology. When two protons face each other, they repel, turning the nano-axles apart. And if, as in a water molecule, there are electrons on the other side of the nano-axle facing a proton, the opposite charges pull the molecules toward each other. 


That makes those messages practically impossible to break. Nobody can reconstruct an entirety whose pieces are scattered across multiple places far from each other. The pieces can then be carried to one position. Those data-carrying nanomachines would be like metal powder that a courier transports to the user in physical form. 
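The physical courier scenario is of course speculative, but the underlying security idea has a well-known digital counterpart: XOR secret splitting, where a message is divided into random shares and every single share is needed to reconstruct it. A minimal sketch in Python, purely to illustrate the "pieces are useless alone" property:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_message(message: bytes, n_shares: int) -> list[bytes]:
    """Split a message into n shares; all n are needed to reconstruct it."""
    shares = [secrets.token_bytes(len(message)) for _ in range(n_shares - 1)]
    # The final share is the message XORed with all the random shares.
    final = reduce(xor_bytes, shares, message)
    return shares + [final]

def reassemble(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the message."""
    return reduce(xor_bytes, shares)

pieces = split_message(b"nanobot payload", 4)
assert reassemble(pieces) == b"nanobot payload"
# With even one piece missing, the rest look like random noise.
```

Any subset of fewer than all shares is statistically indistinguishable from random bytes, which is the digital analogue of an entirety that cannot be reconstructed while its pieces are apart.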

If the system can program physical molecules, it can take physical self-fragmentation to a new level. When physical molecules form a structure, they require the same type of information as some data structures: each piece needs to know which structures sit next to the central structure. The ability to transport information between protons helps to create that kind of capability. 
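The "each piece knows its neighbor" idea also has a direct digital counterpart: fragments that carry only local adjacency information can be reassembled without any global index. A hypothetical sketch (the fragment format is invented for illustration):

```python
import random

def fragment(data: str, size: int):
    """Cut data into pieces; each piece carries only its own random id
    and the id of the piece that follows it (local neighbor info)."""
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    ids = random.sample(range(10_000), len(chunks))
    return [(ids[i], ids[i + 1] if i + 1 < len(chunks) else None, chunk)
            for i, chunk in enumerate(chunks)]

def reassemble_chain(pieces) -> str:
    """Rebuild the data by following each piece's neighbor pointer."""
    by_id = {pid: (nxt, chunk) for pid, nxt, chunk in pieces}
    pointed_to = {nxt for _, nxt, _ in pieces}
    # The head is the only piece no other piece points to.
    current = next(pid for pid in by_id if pid not in pointed_to)
    out = []
    while current is not None:
        nxt, chunk = by_id[current]
        out.append(chunk)
        current = nxt
    return "".join(out)

pieces = fragment("self-assembling structure", 4)
random.shuffle(pieces)  # pieces arrive in arbitrary order
assert reassemble_chain(pieces) == "self-assembling structure"
```

The point of the sketch is that no piece holds a global map; purely local neighbor links are enough to recover the whole, which is the informational requirement the paragraph describes for molecular structures.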

The idea is that protonic structures are placed in nano-scale manipulators, or nano-axles. Those axles turn protons into the molecular structure or out of it. The protons make that point of the molecule electropositive. The ability to control the nano-axle's position makes it possible to turn molecules in the direction the operators want. 

When two protons face each other, they repel each other. When electrons face protons, the attraction pulls the molecules together. That makes it possible to control the shape of a molecule, and it would be a fundamental tool for creating large-scale nanostructures. 
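As a back-of-envelope check on the repel/attract mechanism, Coulomb's law gives the sign and magnitude of the force between two point charges. A toy calculation with two elementary charges one nanometer apart (a simplified point-charge model, not a real molecular simulation):

```python
K = 8.988e9    # Coulomb constant, N*m^2/C^2
E = 1.602e-19  # elementary charge, C

def coulomb_force(q1: float, q2: float, r: float) -> float:
    """Force between two point charges; positive = repulsive."""
    return K * q1 * q2 / r**2

r = 1e-9  # 1 nanometer separation
f_pp = coulomb_force(+E, +E, r)  # proton facing proton
f_pe = coulomb_force(+E, -E, r)  # proton facing electron
print(f"proton-proton:   {f_pp:+.2e} N (repulsive)")
print(f"proton-electron: {f_pe:+.2e} N (attractive)")
```

The force comes out around 0.2 nanonewtons at this distance, with the sign flipping between the two configurations, which is exactly the push/pull pair the text describes.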


https://scitechdaily.com/breaking-the-brain-muscle-barrier-scientists-discover-hidden-neural-network-like-abilities-of-self-assembling-molecules/


https://scitechdaily.com/one-pot-wonder-the-new-nanosheet-method-catalyzing-a-green-energy-revolution/


https://scitechdaily.com/reimagining-fuel-cells-and-batteries-mit-chemists-unveil-proton-transfer-secrets/




Saturday, February 10, 2024

New laser technology paves the way to new types of secure communication and weapon technology.




Water and laser technology. 


An advanced 3D-printing system can use water, or water ice, to create large structures in cold places. Water can also be used in laser elements. High-power lasers require long elements. The system could send empty laser tubes to space and then fill them with absolutely clean water ice, delivered to the spacecraft in tanks. 

The laser would consist of glass tubes that the system fills with water. Absolutely clean water, of the kind developed for nuclear weapons projects, can be used to make homogeneous ice. In that process, an electrolytic system splits the water into hydrogen and oxygen, and those gases are then burned back together. That process removes gas bubbles and dirt from the water. In regular laser tubes, dirt or non-homogeneous spots in the material break the laser element. The reason is that scattered light forms non-homogeneous energy concentrations in the element, and that energy pushes atoms away. 

Water as a tool for laser elements: in a James Bond movie, the villain uses water as the element of a high-power laser. And water really is a good medium for laser elements. Chemically cleaned water, from which gas is completely removed, can be frozen into ice, and the element in the middle of the laser can be filled with that ice. A high-power cooling system keeps the ice absolutely stable while flash tubes pump it with radiation that the system transforms into laser rays. That kind of technology could be used to create very long laser elements. 





New hybrid antennas make laser communication more effective. 


In pure laser communication, the problem is that the system must aim the laser beam far more accurately than a pure radio system. In hybrid systems, the radio transmitter helps the communication system aim the laser detector and laser transmitter at the right points. That makes laser communication more effective than ever before. The hybrid system can be a dipole antenna with a laser in the middle of it. 

In hybrid systems, engineers put the radio transmitter and the laser system into one unit: a radio transmitter with a laser system in the same structure. The problem with radio transmission is that other radio waves, or "white radio noise", disturb the message. Laser communication is harder to jam, and an outside observer cannot break the signal, especially if the system carries the message on an inner laser beam while an outer laser beam covers it. 
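The coarse-radio/fine-laser pointing idea can be sketched as a two-stage search: the wide radio beam first narrows the search window, and the narrow laser beam then scans only inside that window. A simplified one-dimensional simulation (the beamwidth numbers are invented for illustration and are not NASA's figures):

```python
def ang_dist(a: float, b: float) -> float:
    """Shortest angular distance between two bearings, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def frange(start, stop, step):
    a = start
    while a < stop:
        yield a
        a += step

def two_stage_acquire(target_deg: float) -> float:
    """Coarse radio scan over 360 degrees, then a fine laser scan
    restricted to the radio window that detected the target."""
    RADIO_BEAM = 10.0  # wide radio beamwidth in degrees (assumed)
    LASER_BEAM = 0.1   # narrow laser beamwidth in degrees (assumed)

    # Stage 1: sweep the whole circle in coarse radio-beam steps.
    coarse = next(a for a in frange(0.0, 360.0, RADIO_BEAM)
                  if ang_dist(a, target_deg) <= RADIO_BEAM / 2)
    # Stage 2: sweep only the radio window in fine laser-beam steps.
    fine = next(a for a in frange(coarse - RADIO_BEAM / 2,
                                  coarse + RADIO_BEAM / 2, LASER_BEAM)
                if ang_dist(a, target_deg) <= LASER_BEAM / 2)
    return fine % 360.0

estimate = two_stage_acquire(123.456)
assert ang_dist(estimate, 123.456) <= 0.05
```

The coarse stage needs only 36 steps, and the fine stage only 100, instead of the 3,600 steps a pure laser sweep of the full circle would take; that step reduction is the practical benefit of the hybrid antenna.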



But the same systems can be used as weapons. 


Radio waves can also be weapons. If plus- and minus-pole radio waves cross each other, they form an electric arc in the air. High-power radio transmitters can also create EMP pulses, which have devastating effects on electronics. Microwave systems induce heat in metal shells, and they can destroy aircraft, drones, and other things like grenades. Microwave beams can destroy large groups of incoming ammunition. 

Laser beams are more accurate than microwaves. If the laser system is connected to radars and optical seekers, it can react extremely fast. When that kind of system sees something like an incoming hypersonic missile, it can send energy impulses into it with a very short reaction time. 

There are two ways to create energy impulses for lasers and other DEWs (directed-energy weapons). A long-duration energy impulse melts the target: it causes, for example, a hypersonic weapon's shell material to turn soft, which destroys the high-speed incomer's structure. 

The other approach uses a short and very powerful energy impulse that causes fast thermal expansion. This kind of system could be combined with things like Bose-Einstein condensates. In the most effective versions, EM weapons that use short energy impulses could first use very cold gas or particles to decrease the target's temperature. When the energy impulse then hits the target, the sudden heating increases the thermal expansion, and that breaks the shell of the high-speed incomer. 
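The pre-cooling argument can be checked with the linear thermal expansion formula ΔL/L = α·ΔT. What cracks a shell in a thermal shock is the strain mismatch between the flash-heated surface layer and the still-cold bulk beneath it, and pre-cooling the bulk widens that temperature gap. A back-of-envelope sketch using aluminum's expansion coefficient (all scenario temperatures are invented for illustration):

```python
ALPHA_AL = 23e-6  # linear thermal expansion coefficient of aluminum, 1/K

def strain_mismatch(t_surface_k: float, t_bulk_k: float) -> float:
    """Strain difference between a flash-heated surface layer and the
    cooler bulk beneath it: dL/L = alpha * (T_surface - T_bulk)."""
    return ALPHA_AL * (t_surface_k - t_bulk_k)

# The impulse flash-heats the surface to ~900 K in both cases;
# pre-cooling lowers the bulk temperature before the shot.
normal    = strain_mismatch(900.0, 300.0)  # target at ambient temperature
precooled = strain_mismatch(900.0, 150.0)  # bulk cooled by cold gas first

print(f"ambient bulk:   strain mismatch = {normal:.2%}")
print(f"precooled bulk: strain mismatch = {precooled:.2%}")
```

In this toy case the mismatch grows from roughly 1.4% to roughly 1.7%, illustrating why the same impulse stresses a pre-cooled shell harder than a warm one.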


https://scitechdaily.com/beyond-the-limits-the-surprising-power-of-water-in-laser-development/


https://scitechdaily.com/nasas-hybrid-antenna-ushers-in-a-new-era-of-deep-space-laser-communication/


https://en.wikipedia.org/wiki/Laser


The AI and new upgrades make fusion power closer than ever.

"New research highlights how energetic particles can stabilize plasma in fusion reactors, a key step toward clean, limitless energy. Cr...