
observer

MRFI commonly used observers

An observer is a class with callback methods update(x), result(), reset(), and an optional method update_golden(x).

The callback update(x) is called for each inference batch. An observer usually accumulates its results across batches until its reset() is called.

To collect the results of all observers in an MRFI model, use fi_model.observers_result().

To reset all observers in an MRFI model, use fi_model.observers_reset().

Note

🔍 Simple observers can run directly.

🛠️ Fault injection observers require a golden run before each fault injection run, so that the impact of the fault injection can be compared.

# fi_model has FI observers, e.g. RMSE

fi_model.observers_reset() 
for inputs, labels in dataloader:
    with fi_model.golden_run():
        fi_model(inputs)
    fi_model(inputs)
result = fi_model.observers_result()

BaseObserver()

Base class for observers.

MRFI calls these callback methods during inference. A custom observer should implement the following methods.

reset()

Reset the observer between experiments.

For example, you can set initial sum value to 0 here.

update_golden(x)

Called on every model inference when mrfi.golden == True.

Parameters:

    x (torch.Tensor): Internal observation value, usually a batched tensor of feature maps. Required.

update(x)

Called on every model inference when mrfi.golden == False.

Parameters:

    x (torch.Tensor): Internal observation value, usually a batched tensor of feature maps. Required.

result()

Called to collect the observation result after an experiment.

You can postprocess the observed values here.
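Putting the callbacks together, a minimal custom observer might look like this. This is a sketch: MeanObserver is a hypothetical example that implements only the contract described above, not a class from MRFI.

```python
import torch

class MeanObserver:
    """Hypothetical custom observer: accumulates the mean of observed values."""

    def reset(self):
        # Called before a new experiment: clear accumulated state.
        self.total = 0.0
        self.count = 0

    def update(self, x: torch.Tensor):
        # Called once per inference batch; accumulate across batches.
        self.total += x.sum().item()
        self.count += x.numel()

    def result(self):
        # Called after the experiment to collect the final value.
        return self.total / self.count if self.count else None

obs = MeanObserver()
obs.reset()
obs.update(torch.tensor([1.0, 2.0, 3.0]))
obs.update(torch.tensor([5.0]))
mean = obs.result()  # (1 + 2 + 3 + 5) / 4 = 2.75
```

A fault-injection observer would additionally implement update_golden(x) to store reference values from the golden run.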

MinMax

🔍 Observe the min/max range of tensors.

Returns:

    minmax (tuple[float, float]): A tuple (min_value, max_value).
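The running accumulation MinMax performs can be sketched as follows. This follows the description above only; it is not MRFI's actual implementation.

```python
import torch

# Running range, extended on each update(x) call across all inferences.
min_value, max_value = float("inf"), float("-inf")
for batch in [torch.tensor([0.5, -1.0]), torch.tensor([2.0, 0.1])]:
    min_value = min(min_value, batch.min().item())
    max_value = max(max_value, batch.max().item())
minmax = (min_value, max_value)  # result(): (-1.0, 2.0)
```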

RMSE

🛠️ Root Mean Square Error metric between the golden run and the fault injection run.

Returns:

    RMSE (float): RMSE value of the fault injection impact.
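On a single batch, the metric compares the tensor saved by update_golden(x) with the one seen by update(x). A minimal sketch of that comparison (not MRFI's implementation, which also accumulates across batches):

```python
import torch

golden = torch.tensor([1.0, 2.0, 3.0, 4.0])  # saved during the golden run
faulty = torch.tensor([1.0, 2.0, 3.0, 6.0])  # observed during the FI run
# Root mean square of the elementwise difference
rmse = torch.sqrt(((faulty - golden) ** 2).mean()).item()  # sqrt(4/4) = 1.0
```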

SaveLast

🔍 Simply save the internal tensor of the last inference.

This is helpful for visualizing NN feature maps.

Returns:

    last_tuple (tuple): The last golden-run activation and the last FI-run activation, as a tuple (golden_act, FI_act). If no such run occurred before getting the result, returns None.

MaxAbs

🔍 Observe the maximum absolute value of tensors.

Returns:

    maxabs (float): Similar to x.abs().max(), but over all inferences.

MeanAbs

🔍 Mean of absolute values, a metric of the scale of values.

Returns:

    meanabs (float): Similar to x.abs().mean(), but over all inferences.

Std

🔍 Standard deviation of zero-mean values.

Returns:

    std (float): Similar to sqrt((x**2).mean()), but over all inferences.
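The cross-inference accumulation described here can be sketched by keeping a running sum of squares and an element count. This is an illustrative sketch of the formula, not MRFI's code:

```python
import torch

batches = [torch.tensor([1.0, -3.0]), torch.tensor([2.0, -2.0])]
# Accumulate sum of squares and element count across inferences,
# so the result matches sqrt((x**2).mean()) over all observed values.
sq_sum, count = 0.0, 0
for x in batches:
    sq_sum += (x ** 2).sum().item()
    count += x.numel()
std = (sq_sum / count) ** 0.5  # sqrt((1 + 9 + 4 + 4) / 4) = sqrt(4.5)
```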

Shape

🔍 Simply record the tensor shape of the last inference.

Returns:

    shape (torch.Size): Shape of the last input tensor.

MAE

🛠️ Mean Absolute Error between the golden run and the fault injection run.

Returns:

    MAE (float): MAE metric of the fault injection impact.

EqualRate

🛠️ Compare how many values remain unchanged between the golden run and the fault injection run.

Returns:

    rate (float): The average ratio of values that remain unchanged, in [0, 1].

      • If all values have changed, returns 0.

      • If all values are the same as in the golden run, returns 1.
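On a single batch, this amounts to an elementwise equality check between the golden and fault-injected tensors. A minimal sketch (not MRFI's implementation, which also averages across batches):

```python
import torch

golden = torch.tensor([1.0, 2.0, 3.0, 4.0])  # from the golden run
faulty = torch.tensor([1.0, 9.0, 3.0, 4.0])  # from the FI run
# Fraction of elements left unchanged: 3 of 4 -> 0.75
rate = (golden == faulty).float().mean().item()
```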

UniformSampling

🔍 Uniform sampling of values from tensors over all inferences, up to 10000 samples.

Works well with statistical visualization, e.g. plt.hist() or plt.boxplot().

Info

Since NN feature maps are usually large, saving every feature map (e.g. with SaveLast) and sampling afterwards is inefficient.

This observer automatically samples values over all inferences with uniform probability.

Returns:

    array (np.array): A 1-D numpy array whose length is min(number of observed values, 10000).
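One standard way to sample uniformly over all inferences without storing everything is reservoir sampling; the following is a hypothetical sketch of that technique (the Reservoir class and its k parameter are illustrative assumptions, not MRFI's actual code):

```python
import random
import torch

class Reservoir:
    """Keep at most k values; every observed value is retained
    with equal probability (classic reservoir sampling)."""

    def __init__(self, k=10000):
        self.k, self.seen, self.samples = k, 0, []

    def update(self, x: torch.Tensor):
        for v in x.flatten().tolist():
            self.seen += 1
            if len(self.samples) < self.k:
                self.samples.append(v)
            else:
                # Replace an existing sample with probability k / seen
                j = random.randrange(self.seen)
                if j < self.k:
                    self.samples[j] = v

r = Reservoir(k=5)
r.update(torch.arange(100, dtype=torch.float32))
length = len(r.samples)  # min(observed values, k) = 5
```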