sweezyjeezy 2 days ago

Let U ~ Uniform(0,1), and let the sensor's target measurement be x, so A ~ (x + U)/2, and B ~ x or U with probability 0.5 each. We draw a from A and b from B, and we want the estimator that minimises mean absolute error: the Bayes-optimal rule is the posterior median of x under the likelihood L(x | a, b).

Note that if a = 0 and b = 1, we KNOW b != x, because a is too small: there is no u with (u + 1) / 2 = 0. I'll skip the full calculation here, but basically if b could feasibly be correct, its "atomic weight" (the point mass it carries in the posterior) ends up being at least 0.5, so it is the posterior median; otherwise we know b is just noise, and the median is just a. So our estimator is

b if a in range [b/2, (b+1)/2]; a otherwise

This appears to do better than OP's solution in an experiment of 1M trials (MAE ~0.104 vs ~0.116; I can verify OP's numbers). The estimator that minimises mean squared error (the posterior mean) is more interesting: on the range a in [b/2, (b+1)/2] it becomes a nonlinear function of a of the form 1 / (1 + piecewise_linear(a)).
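A quick Monte Carlo sketch of the rule above (my own, assuming A = (x + U)/2 as the feasibility interval implies, with independent noise draws for A and B):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

x  = rng.random(n)          # true value, Uniform(0,1)
u1 = rng.random(n)          # sensor A's noise draw
u2 = rng.random(n)          # sensor B's noise draw, independent of u1

a = (x + u1) / 2            # sensor A: average of signal and noise
coin = rng.random(n) < 0.5
b = np.where(coin, x, u2)   # sensor B: signal or pure noise, 50/50

# Posterior-median rule: trust b iff a is consistent with b being x,
# i.e. a in [b/2, (b+1)/2]; otherwise fall back to a.
feasible = (a >= b / 2) & (a <= (b + 1) / 2)
est = np.where(feasible, b, a)

mae = np.mean(np.abs(est - x))
print(mae)   # ~0.104, matching the figure above
```

Note that when b really is the signal, a = (b + u1)/2 always lands inside [b/2, (b+1)/2], so the rule never wrongly discards a correct b; the residual error comes entirely from noise draws that happen to look feasible.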

  • tennysont 2 days ago

    I was not able to replicate OP's work, so I must be misunderstanding something. Based on these two lines:

    > U is uniform random noise over the same domain as P

    > samples of P taken uniformly from [0, 1)

    I have concluded that U ~ Uniform(0,1) and X ~ Uniform(0,1), i.e., U and X are i.i.d. Once I have that, there is never any way to break the symmetry between X and U, and B always has a 50% chance of being either X or U.

    • sweezyjeezy 2 days ago

      There are two i.i.d. Uniform noise variables: the U in A ~ (x + U)/2 and the U in B ~ (x or U) are independent draws.
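To make the symmetry-breaking concrete (again assuming A = (x + U)/2): because A's noise is a separate draw from B's, observing a can rule b out as the signal. A small check:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

x  = rng.random(n)
u1 = rng.random(n)   # A's noise draw
u2 = rng.random(n)   # B's noise draw, independent of u1

a = (x + u1) / 2

# When B reports the signal (b = x), a is ALWAYS in [b/2, (b+1)/2]:
b_signal = x
always = np.all((a >= b_signal / 2) & (a <= (b_signal + 1) / 2))

# When B reports noise (b = u2), a falls in that interval only sometimes:
b_noise = u2
frac = np.mean((a >= b_noise / 2) & (a <= (b_noise + 1) / 2))

print(always, frac)   # True, ~2/3
```

So a noise reading from B is exposed as infeasible about a third of the time, which is exactly the asymmetry that a single shared U would erase.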

defanor 2 days ago

Looks like A and B are swapped in the "errorB vs. errorA" plot's description: in other places (and on the plot itself, actually) A is the noisy sensor, B is the occasionally-faulty one.

reedf1 5 days ago
  • kavouras 5 days ago

    This doesn't work very well with the kind of sensor in the article

    • FabHK 2 days ago

      Yes, there is no state or state-transition matrix here, really.

      Kalman filters are good where you have a system with a state, and an estimate of the state, and you act on the system, and then you measure the outcome, repeatedly.

      Acting on the system and propagating your estimate forward "one step" increases the uncertainty of your estimate, and measuring decreases the uncertainty of your estimate.
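A minimal 1-D sketch of that predict/measure loop (hypothetical numbers; with a scalar state the state-transition matrix is just 1):

```python
import numpy as np

rng = np.random.default_rng(2)

q = 0.1   # process noise variance: uncertainty added by each "act/propagate" step
r = 0.5   # measurement noise variance

x_true = 0.0
x_est, p = 0.0, 1.0   # state estimate and its variance

for _ in range(100):
    # Act on the system / propagate the estimate one step: uncertainty grows.
    x_true += rng.normal(0, np.sqrt(q))
    p += q

    # Measure the outcome and update: uncertainty shrinks.
    z = x_true + rng.normal(0, np.sqrt(r))
    k = p / (p + r)            # Kalman gain
    x_est += k * (z - x_est)
    p *= (1 - k)

print(p)   # steady-state variance, well below the measurement variance r
```

The point of the sketch is the rhythm: p grows by q in the predict step and is pulled back down in the update, settling at a fixed point below r. With no state to propagate (as with the article's memoryless sensors), there is nothing for this loop to do.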

echoangle 5 days ago

Is this just a thought exercise? Are there any sensors that actually work like the model described here?

  • tibbar 4 days ago

    While it's an idealized/toy setting, yes, these are both real categories of sensors. In particular, Sensor B, the "weird one", is just a system that has some defect/failure rate. An example might be a system that only works during the day and fails at night because it uses a camera. Or a camera that's taking pictures from behind a helicopter rotor, so it's frequently obstructed. Or maybe you are actually using a bunch of sensors and some of them are broken. (Of course, you'd have to tweak it a bit to get a scenario where every measurement is truly a 50% random split between meaningful and non-meaningful readings, and you can't easily tell the difference, but as I said, this post is an idealized/toy setup.)

  • photon_rancher 4 days ago

    You could have some moving element with limit switches and an encoder, that’s pretty common. There’s probably others…

    Normally the limit switch would be reliable, but it will degrade over time, could be damaged in use, be assembled wrong, etc., and the encoder might not be very accurate from the get-go.

    So if you want a safe/reliable system under as many conditions as possible, you might get a problem space like this.