Use the right framework for AI development – Python and Scikit-learn

This is the continuation of frameworks-for-machine-learning-how-to-get-started-in-ai-development.

However, if you want to work on the code of an application without a graphical user interface and integrate the finished model into an application, you have to use a programming language. Thankfully, almost all popular programming languages are suitable for developing machine learning applications. There are hardly any restrictions, and the communities around these languages are usually quick to integrate new libraries. If you have no particular preference, Python is the language to choose. Why Python? Python is popular in the scientific world, and many algorithms and machine learning frameworks become available there first.

Python – Scikit-learn

A simple framework for getting started with Python is Scikit-learn. Much is possible with just a few lines of code: after a quick glance at the Scikit-learn example projects, a simple classifier can be implemented and trained on the data at hand. Scikit-learn is suitable for beginners because it already includes many metrics for determining the quality of a model, and it also offers very good options for visualization.
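As a minimal sketch of what this looks like in practice (the bundled iris dataset and the choice of a random forest classifier are illustrative assumptions, not prescriptions from this article):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Toy dataset bundled with Scikit-learn; stands in for your own data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Train a simple classifier on the training split.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

# Scikit-learn ships the quality metrics mentioned above.
print(classification_report(y_test, clf.predict(X_test)))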

In order to work with the data at all, features (characteristic properties of the data) have to be extracted first. If you work with texts, for example, you might check whether a word occurs or not. You could also reduce each word to its stem and check whether that word stem appears in the text. The occurrence of certain special characters can also be recorded as a feature, depending on which problem you want to solve and what the data looks like. When working with images, edge detection (the Canny algorithm or the Sobel operator) could be used to detect what is shown in the image.
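For the word-occurrence idea, Scikit-learn's own CountVectorizer can turn raw texts into such feature vectors. This is a minimal sketch with made-up example sentences:

from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical example texts.
texts = [
    "the cat sat on the mat",
    "the dog chased the cat",
]

# binary=True records only whether a word occurs, not how often.
vectorizer = CountVectorizer(binary=True)
X = vectorizer.fit_transform(texts)

print(vectorizer.get_feature_names_out())
print(X.toarray())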

Beginners can also use frameworks to extract these features. They help to get the features out of the data; the features then become vectors that you can feed into the algorithm. This process is called feature extraction. When extracting features from text, the frameworks NLTK and spaCy help. They do the job, for example, by estimating which words in the text are places, people, or other named entities, or which part of speech a word is. There are also frameworks for extracting features from images: scikit-image and OpenCV are suitable for this. They can be used to create edge structures and histograms from the images, which work very well as features. These frameworks are recommended for beginners, as they are usually well documented and come with many usage examples. Anyone working outside of Python can also check out the Weka framework for Java or mlpack for C++.
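Two short sketches of what these libraries provide; the example sentence and the sample image are assumptions for illustration, and spaCy additionally requires the en_core_web_sm model to be downloaded beforehand:

import spacy
from skimage import data, feature

# spaCy: named entities and part-of-speech tags as text features.
nlp = spacy.load("en_core_web_sm")
doc = nlp("Ada Lovelace worked with Charles Babbage in London.")
for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. PERSON, GPE
for token in doc:
    print(token.text, token.pos_)

# scikit-image: Canny edge detection on a bundled sample image.
edges = feature.canny(data.camera(), sigma=2.0)
print(edges.shape, edges.dtype)   # boolean edge map, usable as a feature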

Does it go even deeper?

Now that simple problems have been solved, interested readers can devote themselves to more complicated applications. In addition to the so-called shallow neural networks (simple artificial neural networks with simple activation functions), there are even more complex neural networks. Deep learning is the discipline in which these networks have multiple layers and different types of layers. Such networks have been shown to achieve much higher accuracy on complex input (an image or a text) than conventional algorithms. This works so well, among other things, because feature engineering is taken over by components such as auto-encoders, which extract a lot of information from the input medium.
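A shallow network of the kind mentioned above can be trained in Scikit-learn itself. This sketch with a single small hidden layer is an illustrative assumption, not code from the article:

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Small bundled digit images as a stand-in dataset.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer with a simple activation function: a "shallow" network.
net = MLPClassifier(hidden_layer_sizes=(32,), activation="relu", max_iter=500, random_state=0)
net.fit(X_train, y_train)
print(net.score(X_test, y_test))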

Sounds exciting?

It is. However, training models through deep learning also brings some problems. Because of the complexity, many more parameters have to be optimized than with conventional methods, so significantly more data is needed, and you have to obtain that data in advance. In addition, developers have to put much more time into the architecture of the network: deep learning networks have a higher number of layers as well as a greater variety of layer types. Of course, optimization offers much more room for improvement and change in this approach, but it is also more complex and difficult to handle.

If you still want to experiment with deep learning, you should take a look at TensorFlow and Keras. With these two libraries you can build deep learning applications. But beware: the machine learning basics should be clear before working with these two open source libraries. This ensures that you have a rough idea of what is hidden behind the few lines of code needed for a working application.
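For a sense of how few lines those are, here is a minimal Keras sketch; the random toy data, layer sizes, and training settings are all illustrative assumptions:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical toy data: 1000 samples, 20 features, binary labels.
X = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000,))

# A small network with several dense layers.
model = keras.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)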

Conclusion

This article has given a brief insight into the world of machine learning. What it has not shown is how difficult it is to interpret the results of a model well. This should be done very carefully, because after training you have to rely on the training and test data being sufficient to keep the application working even after deployment in a production environment.

What is necessary for this? Good documentation is important, so that any developer who works on the machine learning project, now and in the future, has a good idea of the resulting model. In addition, a review process should be established around the creation of the machine learning model. A bias in the data or the process frequently creeps in unnoticed, so the training becomes very one-sided and develops unwanted tendencies. A review process with another developer or an external representative could prevent these errors and negative effects.

Equally important, however, is to make the results and the process interpretable and understandable so that the optimized algorithms do not have any negative effects on other people or companies.

We read it on T3N.
