You sure about that? The fourth body, being so small, can't really affect the motion of the other three; however, if you have the evolution of the first three, then you can determine the motion of the fourth.
Measurement devices are designed to be very sensitive to some things and very insensitive to others; for example, you want a clock to be sensitive to how much time has passed but not what the temperature or air pressure are; you want a thermometer to be sensitive to the temperature but not how much time has passed or the air pressure; and you want a barometer to be sensitive to the air pressure but not the temperature or how much time has passed.
It's easy to make a device that's sensitive to all three, like a glass jar partly full of water, upside down in a bowl of water, resting on a bed of gravel in the bottom of the bowl, so that some air is trapped inside the jar. The water level inside the glass jar will go up when the air pressure goes up and down when the air pressure goes down. But it will also go down when the temperature goes down and up when the temperature goes up, because the trapped air will expand and contract. And over time water will evaporate from the bowl, reducing the water level outside the jar, so over time the water level inside the jar will go down.
Usually metrology involves either reducing or eliminating these extra influences (a mercury barometer works the same way as the device described above, but is much less sensitive to temperature because it doesn't have any trapped air; and it's less sensitive to time because mercury evaporates very slowly, and the level of the mercury outside the tube is very low) or balancing them against one another so they precisely cancel out. Chaotic metrology would seem to require a different approach.
It's not the only possible way to do things, and sometimes it's infeasible, but it definitely makes things simpler. In theory if you have N quantities, and N measurements that all depend on all N of those quantities in known but different ways, you can usually compute the N quantities precisely from the N measurements.
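The N-measurements-for-N-quantities idea can be sketched numerically. This is a toy example, not a real instrument model: the sensitivity matrix is made up, and the dependences are assumed linear so that a single matrix solve suffices (real devices would need a nonlinear fit).

```python
import numpy as np

# Sketch: N = 3 quantities (say elapsed time, temperature, pressure) and
# three devices, each sensitive to all three in known but different linear
# ways.  The sensitivity matrix below is invented for illustration.
A = np.array([
    [1.0, 0.8, 0.5],   # device 1: response per unit of each quantity
    [0.2, 1.0, 0.1],   # device 2
    [0.1, 0.3, 1.0],   # device 3
])
true_quantities = np.array([2.0, -1.0, 0.5])
readings = A @ true_quantities          # what the devices report

# Because the sensitivity matrix is invertible, the N readings
# determine the N quantities exactly.
recovered = np.linalg.solve(A, readings)
print(recovered)                         # ≈ [ 2.  -1.   0.5]
```

The catch, of course, is that this only works when the matrix is well-conditioned; if two devices respond to the quantities in nearly the same way, small measurement noise blows up in the recovered values.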
For example, the standard way to make an electronic thermometer is by, more or less, measuring the current across a semiconductor diode at a given voltage. This current is an exponential function of the ratio between the voltage and a "threshold voltage" or "thermal voltage" Vt multiplied by an "ideality factor" n: I = Is (exp(V/(nVt)) - 1).
The threshold voltage Vt varies linearly with temperature (it's kT/q, depending only on Boltzmann's constant and the charge of the electron, about 25 mV at room temperature), so in a sense the current at a given voltage is a measurement of the temperature. But the ideality factor n depends on the purity of the semiconductor material (generally in the range 1.0 to 2.0), and the saturation current Is depends on the physical size of the diode junction. Moreover, the ideality factor can change over time as the diode ages. So we're in the position of simultaneously measuring the temperature, the size of the diode, and the quality of its aged semiconductor material.
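The diode equation above is easy to evaluate directly; here's a small sketch, where the saturation current Is and ideality factor n are made-up but plausible values, just to show the shape of the dependence and the ~25 mV thermal voltage:

```python
import numpy as np

k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C

def diode_current(V, T, Is=1e-12, n=1.5):
    """Shockley equation from the text: I = Is * (exp(V/(n*Vt)) - 1)."""
    Vt = k * T / q                    # thermal voltage kT/q
    return Is * (np.exp(V / (n * Vt)) - 1.0)

Vt_room = k * 300.0 / q
print(Vt_room)                        # ≈ 0.0259 V, the ~25 mV in the text
print(diode_current(0.6, 300.0))      # forward current at 0.6 V, 300 K
```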
The solution usually taken, as I understand it, is to measure the current through the same diode at two given voltages, one after the other, and to use a standard value for n which is good enough. Then the ratio of the two currents, together with the known voltage difference, tells you nVt (as long as the "- 1" is too small to matter) and from that you can calculate the temperature. In theory, by taking three or more measurements at different points in the I-V curve, you could correct for unknown n as well, but I haven't read of anyone doing this; instead, for high-precision thermometry, they use an RTD.
(Actually, you measure the voltage at two given currents, because that way you don't burn up your temperature sensing diode if the temperature is a little higher than you expected; the power dissipated then varies logarithmically with temperature rather than exponentially. But it comes to the same thing in the calculations.)
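The two-current method can be sketched like this. The specific Is and n values are invented for illustration; the point is that Is drops out of the voltage difference entirely (and so would a constant offset from a wrong reference voltage), leaving only n, which is assumed known:

```python
import numpy as np

k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C

def simulate_diode_voltage(I, T, Is=1e-12, n=1.5):
    # Invert I = Is*exp(V/(n*Vt)), neglecting the "-1" as in the text.
    Vt = k * T / q
    return n * Vt * np.log(I / Is)

def temperature_from_two_currents(V1, V2, I1, I2, n=1.5):
    # V1 - V2 = n*Vt*ln(I1/I2); Is (and any shared Vref error) cancels.
    Vt = (V1 - V2) / (n * np.log(I1 / I2))
    return q * Vt / k

T_true = 310.0                     # 37 °C, say
I1, I2 = 100e-6, 10e-6             # the two forced currents
V1 = simulate_diode_voltage(I1, T_true)
V2 = simulate_diode_voltage(I2, T_true)
print(temperature_from_two_currents(V1, V2, I1, I2))   # ≈ 310.0
```

In a real sensor the recovered temperature would still carry the error from assuming a standard n, which is exactly the residual sensitivity the comment describes.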
It's actually even worse than it sounds, because in fact when you measure a voltage, you're always measuring it with respect to some reference voltage, so your actual measurement is a function of the temperature, the saturation current Is, the ideality factor n, and your reference voltage Vref, which is typically subject to an error of around 2%. But you will note that the ratiometric approach described above cancels out any errors due to Vref, because the ratio of the two voltages will be unaffected by a wrong reference voltage, as long as it's the same wrong reference voltage. So you stick a big capacitor on it and take the measurements in quick succession.
All of this is, from a certain point of view, in the service of making the number you finally produce very sensitive to the temperature of the diode and very insensitive to other factors, like the battery voltage, the temperature of the rest of the thermometer circuit, the age of the components, the humidity in the air, and so on. But all of the actual physical quantities being measured on the diode are the complex mix of factors described above.
MIMO antennas or phased-array receiver antennas or microphones are another example: the signal at each antenna/microphone is a linear superposition of all the differently-phase-shifted source signals, and you process that data to get independent measurements of all the original source signals.
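The same solve-N-for-N picture applies here. A minimal sketch with a made-up real-valued mixing matrix (a real phased array would use complex phase factors derived from geometry):

```python
import numpy as np

# Two microphones, two sources: each microphone hears a different linear
# mix of the sources.  The mixing matrix here is invented; in practice it
# would come from the array geometry and the resulting phase delays.
rng = np.random.default_rng(0)
sources = rng.standard_normal((2, 1000))     # the two original signals
M = np.array([[1.0, 0.6],                    # microphone 1's mix weights
              [0.4, 1.0]])                   # microphone 2's
mic_signals = M @ sources

# Knowing the mixing, invert it to recover the independent sources.
recovered = np.linalg.solve(M, mic_signals)
print(np.max(np.abs(recovered - sources)))   # ≈ 0, up to rounding
```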
I wouldn't be surprised if chaotic metrology offered new ways to measure very tiny differences, but I suspect it will take a lot of time to figure out the math to make that work.
Yes, you can. You can amplify small differences due to an additional gravitational field. Then you can run a model with this disturbance included and optimize the parameters such that the difference between the observation and the model vanishes. This optimization landscape is not convex, however, and develops more distinct "valleys" over time; by knowing which valley you are in, you gain information. But there are several caveats.
If there is noise in the measurement of the system, this flattens the landscape, making it harder to distinguish which valley you are in. If there is noise in the system itself, that noise will be amplified, and more and more valleys become possible with time, meaning at some point the system state holds almost no information about the parameter.
However, the quantity under measurement should not change while the system runs; otherwise it becomes far more difficult, if not impossible, to optimize the difference between the observed and modeled behaviour for the parameter under measurement.
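The valley-picture above can be illustrated with the simplest chaotic system around. This is a sketch, not a proposal: it fits the parameter of a logistic map by brute-force scanning the model-vs-observation mismatch, and the chaotic sensitivity is what makes the fit sharp:

```python
import numpy as np

def trajectory(r, x0=0.3, steps=30):
    """Logistic map x -> r*x*(1-x), chaotic for r near 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

r_true = 3.9
observed = trajectory(r_true)          # the "measurement", noise-free here

# Scan candidate parameters and compare model to observation.  The
# mismatch landscape has many narrow local valleys; the longer the run,
# the more sharply the global minimum pins down r -- until, as noted
# above, noise in the system or the measurement washes it out.
candidates = np.linspace(3.5, 4.0, 2001)
mismatch = [np.sum((trajectory(r) - observed) ** 2) for r in candidates]
best = candidates[int(np.argmin(mismatch))]
print(best)                            # ≈ 3.9
```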
In general this approach is not advisable, as the chaotic system would also need parts built to enormous precision so that they do not impact the system more than the signal being measured does. So it is usually better to go with a reasonably straightforward approach, since there are other systems which also amplify small differences but are a lot simpler to work with, such as the measurement bridge.
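For comparison, here is how simple the bridge alternative is: a Wheatstone bridge (one common measurement bridge) is just the difference of two voltage dividers, zero when balanced and sensitive to a small change in one arm. The component values are arbitrary illustrations:

```python
def bridge_output(Vs, R1, R2, R3, R4):
    """Wheatstone bridge: Vout = Vs * (R2/(R1+R2) - R4/(R3+R4))."""
    return Vs * (R2 / (R1 + R2) - R4 / (R3 + R4))

# Balanced bridge: output is exactly zero.
print(bridge_output(5.0, 1000.0, 1000.0, 1000.0, 1000.0))   # 0.0

# A 0.1% change in one resistor produces a clean, measurable signal.
print(bridge_output(5.0, 1000.0, 1001.0, 1000.0, 1000.0))   # ≈ 1.25 mV
```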
Not really, because they're already so chaotic that you couldn't be sure which divergences from simulation were inherent and which were due to external perturbation.
Not sure what the discussion is about, but the restricted approximation only works for "insignificant" masses (e.g. a moon, if talking about two big planets and a star). And even then only for "short" time periods (a few million years). On larger scales even that mass will (I assume you've heard of the butterfly effect) play a role.
It's been a while since I've read it, but I seem to recall that there were other planets, but they either got ejected from the system, or fell into one of the stars.
They're gonna feel so silly when they get all the way here to kill us and realize we solved their silly little problem and they have to turn right back around and go home. The looks on their translucent non-existent face things...
Not to mention all the other signs of Earth that those sea-monkey tea-bag aliens, craving a better future, missed with their superior technologies in Alpha Centauri's immediate neighborhood.