Three elements of a network: structure, algorithm, weights

Once the network model is selected, the structure and the algorithm among the three elements are fixed; what remains is to adjust the weights. Training feeds a set of training samples to the network and adjusts the weights according to the difference between the network's actual output and the expected output.

Steps to train the model:
1. Select a sample (Ai, Bi) from the sample set, where Ai is the data and Bi is its label.
2. Feed Ai to the network and compute the network's actual output Y (at this point the weights in the network are still random).
3. Compute the error D = Bi - Y, the difference between the expected and the actual output.
4. Adjust the weight matrix W according to the error D.
5. Repeat the above steps for every sample until the error over the entire sample set no longer exceeds the specified range.

Neural network framework

Caffe is an open-source software framework. With it we can implement new networks, modify existing ones, train them, and write code that uses the trained networks.

To implement a new network:
1. Package the data.
2. Write the network structure file.
3. Write the network solver file.
4. Start training.

Caffe's directory structure: the data directory stores the downloaded training data (after installation it contains, for example, mnist, ilsvrc12, and cifar10); the docs and examples directories contain the help documentation and sample code.

Using the Caffe framework directly is fairly complex, so NVIDIA's DIGITS visualization tool can be used instead to train networks.

TensorRT adopts a "trade precision for speed" strategy: provided accuracy is not significantly reduced, it accelerates inference noticeably, often delivering a more than 2x performance improvement. It can convert a network from its prototxt file and caffemodel weights and run the resulting model at half precision. Its workflow has two phases (a code sketch follows at the end of this section):
Build: construct and optimize the network, producing an engine.
Execution: run the engine to perform the inference task.

Training in deep learning aims to obtain a model with excellent performance, so the main focus is on accuracy and related metrics. Inference is different: it has no backward, iterative pass as training does; it simply makes predictions on new data, and the AI services we use in daily life are all inference services. Compared with training, the focus of inference is different, which brings new challenges: inference cares about high throughput, low response time, low resource consumption, and an easy deployment process. TensorRT is a deployment-level solution to these inference challenges.
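The following is a minimal sketch of the build-then-execute workflow described above, assuming the pre-TensorRT 7 C++ API with the Caffe parser. The file names ("deploy.prototxt", "net.caffemodel"), blob names ("data", "prob"), and tensor sizes are placeholders rather than values from the original text, and the exact FP16 switch depends on the TensorRT version installed.

```cpp
#include <iostream>
#include <cuda_runtime_api.h>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// TensorRT requires a logger; this one prints warnings and errors only.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // ---- Build phase: parse the Caffe prototxt/caffemodel and optimize it ----
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    const IBlobNameToTensor* blobs = parser->parse(
        "deploy.prototxt", "net.caffemodel", *network, DataType::kFLOAT);
    network->markOutput(*blobs->find("prob"));   // mark the output blob by name

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 24);
    builder->setFp16Mode(true);   // "trade precision for speed": allow FP16
                                  // (older TensorRT releases use setHalf2Mode)

    ICudaEngine* engine = builder->buildCudaEngine(*network);
    network->destroy();
    parser->destroy();
    builder->destroy();

    // ---- Execution phase: run the optimized engine on new data ----
    IExecutionContext* context = engine->createExecutionContext();

    // One device buffer per binding; sizes assume a 3x224x224 input
    // and a 1000-class output (adjust to your own network).
    void* buffers[2];
    cudaMalloc(&buffers[engine->getBindingIndex("data")], 3 * 224 * 224 * sizeof(float));
    cudaMalloc(&buffers[engine->getBindingIndex("prob")], 1000 * sizeof(float));

    // Copy the preprocessed input into the "data" buffer with cudaMemcpy, then:
    context->execute(1, buffers);                // synchronous inference, batch size 1
    // ...and copy the result back from the "prob" buffer.

    cudaFree(buffers[0]);
    cudaFree(buffers[1]);
    context->destroy();
    engine->destroy();
    return 0;
}
```

In practice the Build phase is run once offline, and the resulting engine is typically serialized and shipped with the inference service, so that deployment only pays the cost of the Execution phase.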
Neural network training steps and deployment methods
training