Does Infer.NET scale to large models/data sets?
Infer.NET has been designed from the ground up to be computationally efficient. Its compiler architecture means that the generated inference code often approaches the efficiency of hand-written code. Infer.NET also supports batch processing of large datasets by sharing variables between models, and you can implement customised message operators to overcome particular performance bottlenecks (sketches of both techniques appear below). However, there will always be cases where a hand-coded solution can exploit special structure to improve efficiency. If you have an example where Infer.NET-generated code is significantly less efficient than hand-written code, please let us know.

Note that the model compiler in the beta is not itself particularly efficient: we have so far focused on making the generated code efficient, rather than the generation process itself. You should therefore ensure that you invoke the compiler only once or, at most, a small number of times, i.e. not inside a loop (see the last sketch below). Methods for doing this are presented in the documentation.
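For batch processing, a shared variable accumulates the posterior across batches so that only one batch needs to be in memory at a time. Below is a minimal sketch of that pattern, assuming the `SharedVariable` API of the beta; the `LoadBatch` helper and the simple Gaussian-mean model are hypothetical stand-ins for your own data and model.

```csharp
using MicrosoftResearch.Infer;
using MicrosoftResearch.Infer.Models;
using MicrosoftResearch.Infer.Distributions;

class BatchedInference
{
    // Hypothetical loader: reads one chunk of the dataset from disk.
    static double[] LoadBatch(int batch) { return new double[100]; }

    static void Main()
    {
        int numBatches = 10, batchSize = 100;

        // A shared variable carries the posterior between per-batch models.
        Model model = new Model(numBatches);
        SharedVariable<double> mean =
            SharedVariable<double>.Random(Gaussian.FromMeanAndVariance(0, 100));

        // Per-batch model: Gaussian data drawn around the shared mean.
        Range item = new Range(batchSize);
        VariableArray<double> x = Variable.Array<double>(item);
        x[item] = Variable.GaussianFromMeanAndPrecision(mean.GetCopyFor(model), 1.0)
                          .ForEach(item);

        InferenceEngine engine = new InferenceEngine();
        for (int pass = 0; pass < 5; pass++)          // several passes let messages converge
        {
            for (int batch = 0; batch < numBatches; batch++)
            {
                x.ObservedValue = LoadBatch(batch);   // only this batch is in memory
                model.InferShared(engine, batch);     // update the shared posterior
            }
        }
        Gaussian posterior = mean.Marginal<Gaussian>(); // combined result over all batches
    }
}
```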
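A performance bottleneck can also be attacked by supplying your own factor with hand-written message operators, replacing a general-purpose computation with a closed-form one. The following is a rough sketch of that pattern; the `Shift` factor, the operator class name, and the exact message signatures are illustrative assumptions rather than library code.

```csharp
using MicrosoftResearch.Infer.Distributions;
using MicrosoftResearch.Infer.Factors;

// Hypothetical deterministic factor: y = x + offset.
public static class MyFactors
{
    public static double Shift(double x, double offset) { return x + offset; }
}

// Hand-written expectation propagation messages: shifting a Gaussian
// just shifts its mean, so no general-purpose computation is needed.
[FactorMethod(typeof(MyFactors), "Shift")]
public static class ShiftOp
{
    // Message to the factor output, given the incoming message from x.
    public static Gaussian ShiftAverageConditional(Gaussian x, double offset)
    {
        return Gaussian.FromMeanAndVariance(x.GetMean() + offset, x.GetVariance());
    }

    // Message back to x, given the incoming message from the output.
    public static Gaussian XAverageConditional(Gaussian shift, double offset)
    {
        return Gaussian.FromMeanAndVariance(shift.GetMean() - offset, shift.GetVariance());
    }
}
```

The factor would then be used in a model via `Variable<double>.Factor(MyFactors.Shift, x, 3.0)`, with the compiler locating the messages through the `FactorMethod` attribute.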
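To keep compilation out of your inner loop, define the model once using observed variables and change only their `ObservedValue` between inference runs; the compiled algorithm is then reused. A minimal sketch, where the model and the `GetDatasets` source are placeholders:

```csharp
using System.Collections.Generic;
using MicrosoftResearch.Infer;
using MicrosoftResearch.Infer.Models;
using MicrosoftResearch.Infer.Distributions;

class CompileOnce
{
    // Hypothetical source of fixed-size datasets.
    static IEnumerable<double[]> GetDatasets() { yield return new double[100]; }

    static void Main()
    {
        // Build the model once, outside any loop.
        Range n = new Range(100);
        VariableArray<double> data = Variable.Array<double>(n);
        Variable<double> mean = Variable.GaussianFromMeanAndPrecision(0, 1);
        data[n] = Variable.GaussianFromMeanAndPrecision(mean, 1.0).ForEach(n);
        InferenceEngine engine = new InferenceEngine();

        foreach (double[] dataset in GetDatasets())
        {
            // Changing an observed value does not rebuild the model,
            // so the model compiler runs only on the first call to Infer.
            data.ObservedValue = dataset;
            Gaussian posterior = engine.Infer<Gaussian>(mean);
        }
    }
}
```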