AI and R&D

I almost always have a development project going, through which I explore different technologies with my hands deep in the mud. That's the case at the moment as well. This time, the idea is to implement an application (web, iPhone, Android) where users can save objects of interest (e.g. news items, articles) on their own device and analyze them, individually or in combination, against different AI models (a summary of selected items, hate speech analysis of a single object or a combination of objects, and so on). For this purpose, I have mainly studied two areas in more detail:

- application implementation technologies
- use of pre-trained AI models in different environments

Application implementation technologies

On the user interface side, I don't particularly enjoy placing components and adjusting their properties at the code level. That's why I've been interested in various visual design tools. When evaluating them, I've paid special attention to not getting locked into the tool: it should always be possible to implement things purely at the code level. In addition, I want the tool to be strictly for UI development. The UI is just the user interface, and all business logic, data security and so on are ultimately handled behind REST APIs. I currently use two different tools to implement user interfaces: Bootstrap Studio and FlutterFlow.

Bootstrap Studio
This is a website building tool that makes it easy to create sites that scale to different screen sizes. The tool can also produce PWAs (Progressive Web Applications), which sit somewhere between a native application and a web application. For example, this site was built with Bootstrap Studio.

Bootstrap Studio produces Bootstrap HTML, which has built-in scaling for different screen sizes. Communication with REST APIs is done in JavaScript. Bootstrap Studio projects have no built-in UI state management of the kind found in more advanced web frameworks such as React or Angular, so the developer must implement it when using this tool.

FlutterFlow
Flutter is an application development framework from Google that lets you build user interface applications for iOS, Android and the web from a single codebase. Windows and macOS applications can also be built with Flutter. Flutter's programming language is Dart.

FlutterFlow is a visual development tool for building Flutter-based user interfaces (web, Android, iOS) without coding. Embedding your own Dart code in an application is easy, and this was one of my key criteria when choosing tools. Unlike Bootstrap Studio projects, FlutterFlow can produce native applications, and its web applications can use built-in state management.

In this R&D project, I will probably use FlutterFlow for the user interface implementation.

Use of pre-trained AI models in different environments

The starting point here was to find ways to use pre-trained AI models via REST APIs. Many ready-made models are available, for example in Azure, AWS and Hugging Face (open-source models).
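
As a concrete sketch, calling a hosted pre-trained model over REST looks roughly like the following in Python. This is a minimal example against Hugging Face's hosted Inference API; the model name and token here are placeholders, and error handling is omitted:

```python
import json
import urllib.request

# Hugging Face hosted Inference API endpoint (the model id goes in the path).
HF_API_URL = "https://api-inference.huggingface.co/models/{model_id}"

def build_inference_request(model_id: str, text: str, api_token: str) -> urllib.request.Request:
    """Build a POST request that sends `text` to a hosted model."""
    payload = json.dumps({"inputs": text}).encode("utf-8")
    return urllib.request.Request(
        HF_API_URL.format(model_id=model_id),
        data=payload,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending the request would be urllib.request.urlopen(req); here we only build it.
req = build_inference_request("facebook/bart-large-cnn", "Long article text ...", "hf_xxx")
```

The same request shape works from any client, which is the point: the UI layer only needs to know the URL and the JSON payload, not the model internals.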

Python is a widely used language for training and using models, and it is supported in both Azure and AWS; running models on your own machine is also possible. Python work is generally done in Jupyter notebooks, which can be used in Azure, AWS and locally. In addition, Microsoft offers ML.NET, which has its own visual tool for building and using models and is used in .NET environments. These models can also be exchanged with the Python world via ONNX.

Both Azure and AWS offer the possibility of using Hugging Face models in their AI ecosystems, and in quick tests they seemed to work better in AWS's SageMaker and Bedrock environments. A common feature of these environments is that they offer a ready-made stack of models for use via REST APIs.
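
To make the SageMaker side concrete, here is a minimal sketch of invoking a summarization model deployed behind a SageMaker endpoint. The endpoint name is hypothetical, the boto3 call is shown commented out so the helpers can be read on their own, and the response format assumes the typical Hugging Face summarization output (`[{"summary_text": ...}]`):

```python
import json

def build_payload(text: str) -> bytes:
    """JSON body in the format Hugging Face inference containers expect."""
    return json.dumps({"inputs": text}).encode("utf-8")

def parse_summary(body: bytes) -> str:
    """Pick the summary text out of a [{'summary_text': ...}] response."""
    return json.loads(body)[0]["summary_text"]

# Invoking the endpoint itself requires boto3 and AWS credentials, e.g.:
#   import boto3
#   client = boto3.client("sagemaker-runtime")
#   resp = client.invoke_endpoint(
#       EndpointName="my-summarizer",        # hypothetical endpoint name
#       ContentType="application/json",
#       Body=build_payload("Long article text ..."),
#   )
#   print(parse_summary(resp["Body"].read()))
```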

After various evaluation experiments, I will probably run Hugging Face models on my own machines in either AWS or Azure. The models will be called through AWS Lambda or Azure Functions. Scalability, user rights and other security concerns are handled with Azure or AWS tools, which connect to the Hugging Face models served over REST from my own Linux instances (e.g. EC2).
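
A minimal sketch of what the Lambda side of this architecture could look like: the handler validates the input and forwards it to a hypothetical model server on a private EC2 address. The URL, field names and response format here are assumptions, not a finished design (the `_urlopen` parameter only exists to make the handler testable without a network):

```python
import json
import urllib.request

# Hypothetical model server on a private EC2 instance.
MODEL_URL = "http://10.0.0.12:8080/analyze"

def lambda_handler(event, context, _urlopen=urllib.request.urlopen):
    """AWS Lambda entry point: validate input, forward it to the model host."""
    body = json.loads(event.get("body") or "{}")
    text = body.get("text", "")
    if not text:
        return {"statusCode": 400, "body": json.dumps({"error": "missing 'text'"})}
    req = urllib.request.Request(
        MODEL_URL,
        data=json.dumps({"inputs": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with _urlopen(req, timeout=10) as resp:
        result = json.loads(resp.read())
    return {"statusCode": 200, "body": json.dumps(result)}
```

Authentication, rate limiting and scaling would then sit in front of this handler (API Gateway on AWS), which is exactly the division of labor described above.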

If there is a need to train my own models, I will probably do it with Keras.