IT professionals who want to become AI engineers need to learn new technologies. This post summarizes the languages and frameworks that I’ve started to look into recently.
As a developer I’ve learned several languages, frameworks, and libraries. Getting started with new technologies can be overwhelming, since many skills need to be acquired at the same time. A typical reaction is to take shortcuts, for example by relying on familiar techniques rather than learning the ones established in the community. While this might work in the short term, you eventually need to upskill, for several reasons:
- When working with other AI engineers, using the same languages and tools makes the team more efficient.
- The latest and greatest functionality becomes available in the standard formats first, before it is possibly converted for other frameworks.
- When searching for help, you’ll find the best information in the standard formats used by the community.
Python is the de-facto standard language for everything related to AI. Here are some of its main characteristics:
- Easy to learn
- Attractive for Rapid Application Development
- Specializes in handling data structures
- Emphasizes readability via short code
The following snippet demonstrates how easy it is to define compound datatypes and how to loop through them.
```python
>>> fruits = ['Banana', 'Apple', 'Lime']
>>> loud_fruits = [fruit.upper() for fruit in fruits]
>>> print(loud_fruits)
['BANANA', 'APPLE', 'LIME']
>>> list(enumerate(fruits))
[(0, 'Banana'), (1, 'Apple'), (2, 'Lime')]
>>> numbers = [2, 4, 6, 8]
>>> product = 1
>>> for number in numbers:
...     product = product * number
...
>>> print('The product is:', product)
The product is: 384
```
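Lists are only one of Python’s compound datatypes. As a small sketch (the variable names and prices are made up for illustration), dictionaries support the same comprehension style for key/value data:

```python
# A dictionary maps keys to values; here, fruit names to (hypothetical) prices.
prices = {'Banana': 0.5, 'Apple': 0.3, 'Lime': 0.2}

# Sum all values, then build a new dict with a 10% discount applied.
total = sum(prices.values())
discounted = {name: round(price * 0.9, 2) for name, price in prices.items()}

print(round(total, 2))  # 1.0
print(discounted)       # {'Banana': 0.45, 'Apple': 0.27, 'Lime': 0.18}
```

The comprehension on one line replaces an explicit loop that would otherwise build the new dictionary entry by entry, which is exactly the “readability via short code” mentioned above.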
PyTorch and TensorFlow are the standard deep learning frameworks. PyTorch was created by Meta, TensorFlow by Google.
Here is the official description of PyTorch:
PyTorch enables fast, flexible experimentation and efficient production through a user-friendly front-end, distributed training, and ecosystem of tools and libraries.
With PyTorch, models such as transformers can be developed and compiled to static representations (TorchScript).
```python
import torch

class MyModule(torch.nn.Module):
    def __init__(self, N, M):
        super(MyModule, self).__init__()
        self.weight = torch.nn.Parameter(torch.rand(N, M))

    def forward(self, input):
        if input.sum() > 0:
            output = self.weight.mv(input)
        else:
            output = self.weight + input
        return output

# Compile the model code to a static representation
my_script_module = torch.jit.script(MyModule(3, 4))

# Save the compiled code and model data so it can be loaded elsewhere
my_script_module.save("my_script_module.pt")
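The point of saving the compiled module is that it can later be loaded without the original Python class definition, for example in a separate process or a C++ runtime. A minimal self-contained sketch of that round trip (module definition repeated so the file exists):

```python
import torch

class MyModule(torch.nn.Module):
    def __init__(self, N, M):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.rand(N, M))

    def forward(self, input):
        if input.sum() > 0:
            output = self.weight.mv(input)
        else:
            output = self.weight + input
        return output

# Compile and save, as in the snippet above.
torch.jit.script(MyModule(3, 4)).save("my_script_module.pt")

# Load the compiled module; no reference to the MyModule class is needed here.
loaded = torch.jit.load("my_script_module.pt")

# Run the forward pass; the input length must match M (4 above).
# input.sum() > 0, so the weight.mv(input) branch runs and returns a 3-vector.
out = loaded(torch.ones(4))
print(out.shape)  # torch.Size([3])
```

Note that both branches of the `if` in `forward` are preserved in the compiled representation, which is what distinguishes `torch.jit.script` from tracing.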
TensorFlow is another great framework with advantages such as mobile support. Personally, I prefer PyTorch for two reasons:
- There are 120,399 transformer-based models implemented with PyTorch on Hugging Face, compared to 9,465 with TensorFlow.
- While both frameworks are open source, it feels like PyTorch is ‘more open’ in terms of governance.
Originally, PyTorch was used primarily by researchers, but this seems to have changed in recent years.