So I've been doing backend dev for about 8 years, mostly Java with some Go, and usually I can pick up a new framework in a weekend, but man, this new project has me spiraling. My company just landed a contract for a custom predictive maintenance tool for a factory in Ohio, and the deadline is four months out, which is basically yesterday in dev time. I started looking into the AI side of things and everything says Python, Python, Python, but then I see stuff about C++ being used for the heavy lifting, and now people are talking about Mojo? I'm genuinely stressed about picking the wrong foundation and having the whole thing crawl to a halt once we scale up. I've played around with some scikit-learn basics, but this project is gonna need some serious neural net work and maybe some real-time computer vision if the client gets their way.
I really don't want to spend three weeks learning a stack that ends up being a dead end for this specific use case. What's the actual industry standard right now, if you exclude the hype?
Honestly man, with a four-month deadline, you should probably stay away from anything experimental like Mojo or Julia. They're cool, but the ecosystem just isn't there yet for a high-stakes commercial project. Stick with Python, because that's where all the support is... if you hit a wall, you can find an answer in seconds. Python is fast enough in practice because the big libraries do their heavy lifting in compiled C++ and CUDA anyway; the Python layer is mostly orchestration. I would also suggest being very careful about trying to optimize too early. A few thoughts on your specific questions:
> Is Python actually fast enough for production-level inference, or am I gonna regret not starting with something lower level?

The secret is that Python is basically just a management layer. All the heavy math happens in C++ or CUDA under the hood anyway. For your production inference, you can usually stick with Python for the logic and use something like NVIDIA's TensorRT SDK or ONNX Runtime to handle the execution. They optimize the model graph so it runs way faster than raw Python ever could. Since you're on a 4-month crunch, don't waste time on Rust or C++ for the whole build. Just use Python for training, then export your models.

If the real-time vision part gets heavy, look into the Luxonis OAK-D Pro PoE camera, because it handles the AI processing right on the hardware, which saves your backend from choking. It basically offloads the vision pipeline so your main CPU doesn't have to sweat it.

About the math: honestly, you just need to understand linear algebra basics like matrix dimensions and dot products. If you can visualize how data flows through a layer, you're fine. You don't need to manually derive backpropagation anymore. Library docs are usually enough to get by once you understand the shapes of your data.

Given your Java background, you might find Deep Java Library (DJL) interesting for the deployment phase if you want to keep the production environment in a language you already know well. It lets you run engines like PyTorch or TensorFlow within a JVM environment. It's pretty solid for enterprise stuff where you need stability.
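The "heavy math happens in C++ under the hood" point is easy to see for yourself. A minimal sketch (assuming NumPy is installed; exact timings will vary wildly by machine, but the gap is always dramatic): the same dot product in a pure-Python loop versus NumPy, whose inner loop runs in compiled C.

```python
import time
import numpy as np

n = 1_000_000
a = [0.5] * n
b = [2.0] * n

# Dot product with the Python interpreter doing every multiply-add.
t0 = time.perf_counter()
py_result = sum(x * y for x, y in zip(a, b))
py_time = time.perf_counter() - t0

# Same dot product, but the loop runs inside NumPy's C code.
a_np = np.array(a)
b_np = np.array(b)
t0 = time.perf_counter()
np_result = float(a_np @ b_np)
np_time = time.perf_counter() - t0

print(f"pure Python: {py_time:.4f}s  NumPy: {np_time:.4f}s")
```

Both produce the same number; NumPy is typically orders of magnitude faster, which is why "Python the orchestrator, C++ the engine" works fine for production workloads.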
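On the "visualize how data flows through a layer" advice, here's a minimal sketch of one dense-layer forward pass in NumPy. The sizes (32, 128, 10) are arbitrary illustrations, not from any particular framework: if you can predict the output shape from the input and weight shapes, you know enough to debug most model code.

```python
import numpy as np

batch, in_features, out_features = 32, 128, 10

x = np.random.randn(batch, in_features)         # input activations: (32, 128)
W = np.random.randn(in_features, out_features)  # layer weights:     (128, 10)
b = np.zeros(out_features)                      # bias:              (10,)

# (32, 128) @ (128, 10) -> (32, 10); the bias broadcasts across the batch.
y = x @ W + b

print(y.shape)  # (32, 10)
```

The inner dimensions have to match (128 and 128), and the result takes the outer two. That one rule covers most of the shape errors you'll actually hit.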