MODDIV - Protection of Machine-Learned Models

Our project MODDIV investigates two continuations: of the results of the projects SOFTDIV and PUFFIN, and of the second COMET module at SCCH, S3AI (https://www.s3ai.at/). The goal is to generalize the protection of arbitrary programs to the protection of data, specifically of machine-learned models. Deep learning models constitute a special case of software that plays an important role in Embedded AI: deep neural networks solve complex pattern-recognition problems with human-like accuracy. Notably, there is great flexibility in training such systems, with many equally good parameter configurations, and the resulting networks are robust to tiny random noise in their weights. On the other hand, it can be demonstrated that purposefully constructed noise added to the input data (so-called adversarial examples) can easily fool and manipulate such networks. Beyond standard software diversity approaches, these effects open up new possibilities to construct software diversity in a model-driven fashion and to implant subtle undesirable effects that are triggered in case of exfiltration attacks.
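
The robustness to tiny weight noise can be illustrated with a minimal sketch: adding small Gaussian perturbations to a trained network's parameters yields distinct yet behaviorally equivalent model instances, i.e. a simple form of model-level diversity. This is only an illustration of the general idea, not the project's method; the names `model`, `x`, and the noise scale are assumptions.

```python
# Minimal sketch, assuming a trained torch.nn.Module `model` and an input
# batch `x`; the noise scale sigma = 1e-4 is illustrative, not prescribed.
import copy
import torch

def diversify(model: torch.nn.Module, sigma: float = 1e-4) -> torch.nn.Module:
    """Return a diversified copy whose weights carry tiny Gaussian noise."""
    variant = copy.deepcopy(model)
    with torch.no_grad():
        for p in variant.parameters():
            p.add_(sigma * torch.randn_like(p))  # tiny in-place perturbation
    return variant

# Hypothetical usage: each call produces a distinct model instance that, for
# small sigma, typically agrees with the original on almost all inputs.
# variant = diversify(model)
# agreement = (model(x).argmax(1) == variant(x).argmax(1)).float().mean()
```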
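
Conversely, the fragility to purposefully constructed input noise can be demonstrated with the classic fast gradient sign method (FGSM, Goodfellow et al.). The sketch below shows this standard technique, not necessarily the attack studied in MODDIV; `model`, `x`, `y`, and `eps` are assumed placeholders.

```python
# Minimal FGSM sketch, assuming a classifier `model`, inputs `x`, and
# integer class labels `y`; eps = 0.03 is an illustrative perturbation budget.
import torch
import torch.nn.functional as F

def fgsm(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
         eps: float = 0.03) -> torch.Tensor:
    """Perturb x by eps in the direction that increases the loss, which
    often suffices to flip the model's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()  # populates x.grad with the input gradient
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # assumes inputs scaled to [0, 1]
```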
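
One hedged reading of implanting subtle effects against exfiltration is backdoor-style watermarking: the owner trains the model to respond in a secret, predetermined way to a private trigger set, which later identifies a stolen copy. The trigger set, labels, and threshold below are illustrative assumptions, not MODDIV's design.

```python
# Minimal ownership check, assuming a private trigger set (trigger_x,
# trigger_y) known only to the owner; the 0.9 threshold is illustrative.
import torch

@torch.no_grad()
def matches_watermark(model: torch.nn.Module, trigger_x: torch.Tensor,
                      trigger_y: torch.Tensor, threshold: float = 0.9) -> bool:
    """Return True if the suspect model reproduces the secret trigger
    responses at a rate above the threshold."""
    preds = model(trigger_x).argmax(dim=1)
    return (preds == trigger_y).float().mean().item() >= threshold
```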