In traditional systems, when the error level rises, the swapper server switches to another event handler. The problem is how to determine the error level at which the system should swap to the other data-handling segment, and what to do with the data being processed in the other data-handling segments if those segments are stuck.
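A minimal sketch of that kind of swapper could look like the code below. The threshold value, the handler names, and the error counter are my own assumptions for illustration, not anything from a real swapper server.

```python
# Sketch of a swapper that switches event handlers when the error level
# rises above a chosen threshold. The threshold and handler names are
# illustrative assumptions.

ERROR_THRESHOLD = 5  # how many errors before swapping (assumed value)

def primary_handler(event):
    print(f"primary handling {event}")

def backup_handler(event):
    print(f"backup handling {event}")

class Swapper:
    def __init__(self):
        self.active = primary_handler
        self.error_count = 0

    def report_error(self):
        # Each failed or unfinished task raises the error level.
        self.error_count += 1
        if self.error_count >= ERROR_THRESHOLD:
            # Swap to the other data-handling segment.
            self.active = backup_handler

    def handle(self, event):
        self.active(event)

swapper = Swapper()
for _ in range(7):
    swapper.report_error()
swapper.handle("sensor reading")   # now served by the backup handler
```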
If that happens on the most powerful computers in the world, it can be a catastrophe, because getting time on those machines can take very long. And if the data is lost in the last meters, all the work was done for nothing: if the memory is simply cleared, the data is gone. The next opportunity to use the same computer may be months away, because there are queues of researchers waiting for their chance to use the most powerful calculators in the world.
That shows how important intermediate saving is. If the data is lost, the work invested in the process is gone forever, and if a supercomputer has spent weeks solving some mathematical simulation, losing that data is not a good thing at all. Reserving that kind of system for a long time is very hard, and the time of the most powerful computers is limited: those systems are needed for protein simulations and many other things.
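A minimal sketch of intermediate saving (checkpointing), assuming a long-running simulation whose state fits in a small dictionary; the file name and the saving interval are made-up values:

```python
import json

CHECKPOINT_FILE = "simulation_checkpoint.json"  # assumed path
CHECKPOINT_EVERY = 1000                         # save every N steps (assumed)

def load_checkpoint():
    # Resume from the last saved state if one exists.
    try:
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {"step": 0, "value": 0.0}

def save_checkpoint(state):
    # Write the intermediate result so a crash or a cleared memory
    # does not throw away weeks of computation.
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump(state, f)

state = load_checkpoint()
while state["step"] < 10_000:
    state["value"] += 0.001          # stand-in for one simulation step
    state["step"] += 1
    if state["step"] % CHECKPOINT_EVERY == 0:
        save_checkpoint(state)
```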
So things like the decimals of π, or lists of prime numbers produced to show that Riemann's hypothesis stands forever, are probably not the highest-priority tasks for the most powerful computers in the world. It is of course an interesting question whether the prime number series has an end that could be calculated by using Riemann's conjecture.
But the most powerful computers are needed for many other things, and the time of quantum computers is limited in the same way. Everybody wants time on the most powerful calculators in the world, so the quantum computers are busy.
Let's return to the halting problem. The computer must stop before it can take on a new mission. But how does the machine itself know whether it is stuck in a loop that runs forever? While the computer is working on some problem, it cannot tell whether it is merely busy.
To eliminate that problem, the computer must mark certain points of the code to tell the other parts of the system that it is working on, say, "problem 127". In this case the mark is the serial number of the program, and when the program is done, the system attaches a code saying that the data or event handler is ready for the next mission.
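A rough sketch of that marking could use a shared status table like the one below; the handler name and the task number 127 are just illustrative:

```python
from enum import Enum

class Status(Enum):
    READY = "ready"
    BUSY = "busy"

# Shared table that the other parts of the system can read.
status_board = {}

def start_task(handler_id, problem_number):
    # Mark that this handler is working on e.g. "problem 127".
    status_board[handler_id] = (Status.BUSY, problem_number)

def finish_task(handler_id):
    # Attach the "ready for the next mission" code.
    status_board[handler_id] = (Status.READY, None)

start_task("event_handler_A", 127)
print(status_board)   # event_handler_A is BUSY with problem 127
finish_task("event_handler_A")
print(status_board)   # event_handler_A is READY again
```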
The system also carries values such as a mission time, which determines how long the system is allowed to try to solve the problem. The status reports tell that the data-handling unit is busy, and if the marks show that the processor is still working on the same problem, there should be a controller that cuts the process off if it takes too long.
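A minimal sketch of such a controller, assuming the mission time is given in seconds and that "cutting the process" means terminating a worker subprocess; the timeout value is an assumed example:

```python
import multiprocessing
import time

MISSION_TIME = 2.0   # seconds the system is allowed to try (assumed value)

def solve_problem():
    # Stand-in for a task that may loop forever.
    while True:
        time.sleep(0.1)

if __name__ == "__main__":
    worker = multiprocessing.Process(target=solve_problem)
    worker.start()
    worker.join(timeout=MISSION_TIME)   # wait at most the mission time
    if worker.is_alive():
        # The controller cuts the process because it took too long.
        worker.terminate()
        worker.join()
        print("mission time exceeded, process cut off")
```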
The system sends simultaneous reports to the processors that are monitoring how it works. The report must not be sent to only one receiver, because there is always the possibility that a single receiver is itself stuck.
When physical robots try to solve the "halting problem", there is a protocol in which the central computer sends a query asking whether the system is working fine. The problem is that a stuck data-handling unit cannot tell that there is a problem. So the central command unit can move the legs and arms of the robot to make sure that every part of the computing system is working perfectly. If the system does not get the expected feedback, it must conclude that the processor controlling a certain microchip is stuck.
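A sketch of that feedback check might look like the code below. The functions move_arm and read_arm_angle are hypothetical stand-ins for the robot's actuator command and sensor read-back, not a real robot API:

```python
import time

def move_arm(target_angle):
    # Hypothetical command to the arm controller; here it just succeeds.
    pass

def read_arm_angle():
    # Hypothetical sensor read-back; a stuck controller would return None
    # or an unchanged value.
    return 30.0

def probe_arm(target_angle=30.0, tolerance=1.0, timeout=1.0):
    """Command a small movement and check that the expected feedback arrives."""
    move_arm(target_angle)
    deadline = time.time() + timeout
    while time.time() < deadline:
        angle = read_arm_angle()
        if angle is not None and abs(angle - target_angle) <= tolerance:
            return True           # the controlling processor responded
        time.sleep(0.05)
    return False                  # no feedback: conclude the processor is stuck

if not probe_arm():
    print("arm controller seems stuck, reassign or restart it")
```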
The error level means the number of unfinished tasks the system accumulates in a certain time unit.
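As a sketch, that metric can be computed with a sliding window over the timestamps of unfinished tasks; the one-minute window below is an assumed value:

```python
import time
from collections import deque

WINDOW_SECONDS = 60.0   # length of the time unit (assumed)

class ErrorLevel:
    def __init__(self):
        self.unfinished = deque()   # timestamps of unfinished tasks

    def record_unfinished(self):
        self.unfinished.append(time.time())

    def level(self):
        # Drop records older than the window, then count the rest.
        cutoff = time.time() - WINDOW_SECONDS
        while self.unfinished and self.unfinished[0] < cutoff:
            self.unfinished.popleft()
        return len(self.unfinished)

errors = ErrorLevel()
errors.record_unfinished()
errors.record_unfinished()
print(errors.level())   # 2 unfinished tasks in the current time unit
```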
There can be fixed or so-called flexible answers to that problem. Handling error levels might be based on a fixed solution: the system clears the memory if satisfactory answers are not produced within a certain time, or if the data-handling unit is needed for other missions with higher priority. But clearing the memory means that the work already done is gone forever. An intelligent system stores the intermediate solution at the same time as it sends a report about which problem it is handling. Still, those fixed answers are not flexible.
In dummy systems the data handler asks for assistance from other data or event handlers. If the problem is a so-called "loop that takes forever", it drags all the other event handlers down with it, a data version of the Higgs bubble: one collapsing processor or event handler pulls all the other event handlers or processors with it. But if the control system instead asks for help or information from other databases, that solves the problem.
Artificial intelligence can take error-level handling to the next level. That is especially suitable for quantum computers, but regular computers can also benefit from intelligent technology. In intelligent and flexible solutions the control system keeps a copy of the problem that is taking a long time. So the intelligent system can start to search data stored on the Internet about the problem that is hard for the system; the intelligent error-level handler asks for outside help. And if there is information that the problem cannot be solved, the system orders the data processor to cut its work on that problem and releases the event handler for a new mission.
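A sketch of that flexible policy is below. The function search_external_knowledge is a hypothetical stand-in for an Internet or database lookup, and the cancel and release callbacks are just placeholders:

```python
# Sketch of the "flexible" policy: keep a copy of the slow problem, ask an
# outside source about it, and cancel only if the problem is known to be
# unsolvable.

def search_external_knowledge(problem_description):
    # Pretend lookup; a real system would query a database or web service.
    known_unsolvable = {"halting problem for arbitrary programs"}
    return "unsolvable" if problem_description in known_unsolvable else "unknown"

def flexible_controller(problem_description, cancel_task, release_handler):
    verdict = search_external_knowledge(problem_description)
    if verdict == "unsolvable":
        cancel_task()        # cut the work on this problem
        release_handler()    # free the event handler for a new mission
        return "cancelled"
    return "keep working"

result = flexible_controller(
    "halting problem for arbitrary programs",
    cancel_task=lambda: print("task cancelled"),
    release_handler=lambda: print("handler released"),
)
print(result)   # cancelled
```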
Error handling is also important in quantum computers. A quantum computer is hard to lock up by ordering it to work on loops that take forever, because the other layers will release the stuck layer. But the fact is that if many of the quantum computer's layers are occupied with things like endless decimal numbers, that will make the quantum computer slow.
Image: Pinterest
(https://kimmoswritings.blogspot.com/2021/08/the-problem-with-computing-is-how-to.html)