Making Neural Networks Portable, and other opportunities for AI standards

The question of when standards are premature, timely, or overdue is an old one. Innovators certainly do not want to be stifled by standards enacted too early, while the technologies are still very much in flux. AI is a case in point: neural networks, and especially large language models (LLMs), are changing very quickly.

Here is one way to decide: at which point does everyone have to perform, because of the lack of standards, a large amount of work that does not add value per se? Generally, this has to do with interoperability between systems, or the interchange of data and models. If everyone is wasting their time massaging data to move it from one environment to another, in a way that does not help them be more competitive, this is when standards are valuable.

Then, our job in a standards development organization is to figure out how to let companies that develop AI systems compete through their unique innovations, while saving them time by resolving the portability and interoperability issues.

Note that this focus on interoperability leads us to standards of a rather technical nature, which will contain semantic models (including formal ontologies), languages, data formats, etc. Other organizations are addressing more general – but of course still important – concerns such as risk levels, ethical principles, biases, etc. For example, the NIST AI Risk Management Framework and the European Union AI Act address and categorize AI risk, but do not provide a technique, language, metamodel, etc., to measure risk or assess the impact of mitigation actions.
At OMG, we have been discussing whether the industry now needs, and is ready for, such standards as:

  • a formal, machine-readable AI vocabulary
  • an ontology of AI domains, applications, techniques, languages, etc.
  • a set of metadata that describes training datasets
  • a set of metadata that describes AI products
  • a common language to unify the disparate de facto image classifier descriptions that exist today
  • … and more

On the AI vocabulary front, we are collaborating (under a liaison agreement) with the IEEE Standards Association's P3123 Working Group, which is free to join, and we hope to see a standard emerge in 2026.

Our most promising effort to date is to create a standard model and exchange format for convolutional neural networks. Today, once you tune a network using some training data, you generally must execute the network on the same platform. You cannot "lift and shift" the model across tools (say, between TensorFlow and PyTorch), because there is no standard representation of the trained neural network. You are essentially locked in, or you must recreate the same network manually on another platform and redo the training from the beginning. By "representation," I mean the architecture of the network: its depth (the number of layers), its breadth (the number of nodes at each layer), its topology (the connections between nodes), the activation functions executed at each node (which combine the input values to calculate the output of the node), the parameters that control those functions, and the synaptic values (the weights attached to each connection between neurons) that have been calculated during the training iterations.
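To make the notion of "representation" concrete, here is a minimal sketch, in plain Python, of the elements just listed captured in a platform-neutral structure and serialized to JSON. The field names are illustrative choices of mine, not the PINN standard's, and a real exchange format would of course be far richer:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Layer:
    size: int                      # breadth: number of nodes in this layer
    activation: str = "identity"   # activation function executed at each node
    params: dict = field(default_factory=dict)  # parameters controlling that function

@dataclass
class Network:
    layers: list        # depth = len(layers)
    connections: list   # topology: (from_layer, from_node, to_layer, to_node) tuples
    weights: dict       # synaptic values, keyed by connection

    def to_json(self) -> str:
        """Serialize the trained network to a tool-independent JSON document."""
        return json.dumps(asdict(self), indent=2)

# A toy network: two input nodes feeding one sigmoid output node.
net = Network(
    layers=[Layer(2), Layer(1, activation="sigmoid")],
    connections=[(0, 0, 1, 0), (0, 1, 1, 0)],
    weights={"0.0->1.0": 0.7, "0.1->1.0": -0.3},
)
print(net.to_json())
```

Any tool that can read this document back has everything it needs to rebuild and execute the trained network, without access to the framework that produced it; that is the essence of the portability the RFP seeks.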

[Figure] Synaptic values¹

In March 2025, OMG issued a Request for Proposals (RFP) called PINN: Portability and Interoperability of Neural Networks. As always with OMG, the standard will not be developed internally by a committee, but proposed by one or more OMG members, and evaluated by the AI Platform Task Force. Multiple initial submissions may be combined at the discretion of their authors, retaining the best of each input, to form a smaller number of revised submissions from which we will adopt the one that best meets the requirements specified in the RFP.

We know that a couple of formats already exist to address this challenge, especially the Neural Network Exchange Format (NNEF) from the Khronos Group, and the Open Neural Network eXchange (ONNX) managed by the Linux Foundation. However, we believe that these formats are not comprehensive enough to capture all the elements of a neural network needed for true portability, and we hope that the submissions made in response to the OMG RFP will improve on those formats while considering the need for forward compatibility.

Beyond resolving the platform lock-in issue, the PINN standard would enable a company to create a library, or repository, of trained models that people could download and install regardless of which platform they use. Essentially, this would spur the emergence of a new market.

Finally, there will be systems composed of multiple neural networks working together. PINN will also address the need for interoperability in such a multi-network environment.
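A sketch of that multi-network case, assuming a hypothetical neutral description in which each trained network can be executed from its data alone (the structure and names here are my own illustration, not anything prescribed by the RFP):

```python
import math

# Hypothetical platform-neutral descriptions of two tiny trained networks,
# each a single neuron with fixed (already-trained) weights.
feature_net = {"weights": [0.5, -0.2], "bias": 0.1, "activation": "sigmoid"}
decision_net = {"weights": [1.5], "bias": -0.4, "activation": "sigmoid"}

def run(net, inputs):
    """Execute one network from its neutral description, with no framework dependency."""
    z = sum(w * x for w, x in zip(net["weights"], inputs)) + net["bias"]
    return 1.0 / (1.0 + math.exp(-z)) if net["activation"] == "sigmoid" else z

# Pipeline: the first network's output feeds the second network's input.
score = run(decision_net, [run(feature_net, [0.8, 0.3])])
print(score)
```

Because both networks share one interchange representation, they can be chained even if they were originally trained on different platforms, which is exactly the interoperability scenario a multi-network system raises.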

As I write this, the deadline to express an interest in this RFP – through a Letter of Intent – is approaching (end of September 2025) but it is not too late to participate, or to join other submitters and work together. Anyone who is interested should find and read the RFP at www.omg.org/public_schedule/.

While this effort is going to occupy the submitters and the Task Force for the coming months, we hope that we can also engage AI developers and end users in starting to develop similar requests for some of the other potential standards mentioned above. Don't hesitate to send questions to the co-chairs of the Task Force, Isabell Kunst and myself, at [email protected].

Claude Baudoin

About the Author

Claude Baudoin, Owner and Principal – cébé IT & Knowledge Management LLC, Co-chair, OMG Artificial Intelligence Platform Task Force

1 Sebastian, A., Pendurthi, R., Kozhakhmetov, A. et al. Two-dimensional materials-based probabilistic synapses and reconfigurable neurons for measuring inference uncertainty using Bayesian neural networks. Nat Commun 13, 6139 (2022). https://doi.org/10.1038/s41467-022-33699-7