Improve AI is a machine learning platform for quickly implementing app optimization, personalization, and recommendations for iOS, Android, and Python.
The SDKs provide simple APIs for AI decisions, ranking, scoring, and multivariate optimization that execute immediately, on-device, with zero network latency.
Add JitPack to your root build.gradle at the end of the repositories block:
allprojects {
    repositories {
        ...
        maven { url 'https://jitpack.io' }
    }
}
Add the dependency to your app/build.gradle file:
dependencies {
    implementation 'com.github.improve-ai:android-sdk:7.2.0'
}
Add the default track URL to your AndroidManifest.xml file:
<!-- The track URL is obtained from your Improve AI Gym configuration. -->
<application>
    <meta-data
        android:name="ai.improve.DEFAULT_TRACK_URL"
        android:value="https://xxxx.lambda-url.us-east-1.on.aws/" />
</application>
Load the model:
public class SampleApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();

        // The model URL is obtained from your Improve AI Gym configuration.
        String modelUrl = "https://xxxx.s3.amazonaws.com/models/latest/greetings.xgb.gz";
        DecisionModel.get("greetings").loadAsync(modelUrl);
    }
}
The heart of Improve AI is the which() statement. which() is like an AI if/then statement.
greeting = DecisionModel.get("greetings").which("Hello", "Howdy", "Hola");
which() takes a list of variants and returns the best, where “best” means the variant that provides the highest expected reward given the current conditions.
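Conceptually, which() scores every variant and returns the one with the highest score. That argmax step can be sketched in plain Java, with a stand-in scoring function in place of the real trained model (the function and its scores are assumptions for illustration only):

```java
import java.util.List;

public class WhichSketch {
    // Stand-in for the model's expected-reward estimate. The real SDK uses
    // a trained model; this toy heuristic is purely illustrative.
    static double expectedReward(String variant) {
        return variant.length();
    }

    // Return the variant with the highest expected reward, like which().
    static String which(List<String> variants) {
        String best = variants.get(0);
        for (String v : variants) {
            if (expectedReward(v) > expectedReward(best)) best = v;
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(which(List.of("Hello", "Howdy", "Hola"))); // prints "Hello"
    }
}
```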
Decision models are easily trained with reinforcement learning:
if (success) {
DecisionModel.get("greetings").addReward(1.0);
}
With reinforcement learning, positive rewards are assigned for positive outcomes (a “carrot”) and negative rewards are assigned for undesirable outcomes (a “stick”).
which() automatically tracks its decision with the Improve AI Gym. Rewards are credited to the most recent tracked decision for each model, including from a previous app session.
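The crediting rule can be illustrated with simple bookkeeping: each model remembers its most recent decision, and addReward() credits that decision. This is a sketch of the behavior only, not the SDK's implementation; the decision ids and in-memory storage here are hypothetical (the real SDK persists across app sessions):

```java
import java.util.HashMap;
import java.util.Map;

public class RewardSketch {
    // decisionId -> accumulated reward
    static final Map<String, Double> rewards = new HashMap<>();
    // modelName -> most recently tracked decision id
    static final Map<String, String> lastDecision = new HashMap<>();

    static void trackDecision(String model, String decisionId) {
        lastDecision.put(model, decisionId);
    }

    // Credit the reward to the model's most recent decision, if any.
    static void addReward(String model, double reward) {
        String id = lastDecision.get(model);
        if (id != null) rewards.merge(id, reward, Double::sum);
    }

    public static void main(String[] args) {
        trackDecision("greetings", "d1");
        trackDecision("greetings", "d2"); // the most recent decision wins
        addReward("greetings", 1.0);
        System.out.println(rewards.get("d2")); // prints 1.0
    }
}
```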
Unlike A/B testing or feature flags, Improve AI uses context to make the best decision for each user. On Android, device context, such as the system language, is automatically included with each decision.
Using the context, on a Spanish speaker’s device we expect our greetings model to learn to choose Hola.
Custom context can also be provided via given():
greeting = greetingsModel.given(Map.of("language", "cowboy")).which("Hello", "Howdy", "Hola");
Given the language is cowboy, the variant with the highest expected reward should be Howdy and the model would learn to make that choice.
Ranking is a fundamental task in recommender systems, search engines, and social media feeds. Fast ranking can be performed on-device in a single line of code:
rankedWines = sommelierModel.given(entree).rank(wines);
Note: Decisions are not tracked when calling rank(). which() or decide() must be used to train models for ranking.
Scoring makes it easy to turn any database table into a recommendation engine.
Simply add a score column to the database and update the score for each row.
scores = conversionRateModel.score(rows);
At query time, sort the query results descending by the score column and the first results will be the top recommendations. This works particularly well with local databases on mobile devices where the scores can be personalized to each individual user.
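The query-time step can be sketched in plain Java: sort the rows descending by their score column and take the head of the list. The Row shape here is hypothetical, standing in for whatever your database rows look like:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TopRecommendations {
    // Hypothetical row shape: an id plus the score column written by score().
    record Row(String id, double score) {}

    // Return the top-n rows by descending score, as the sorted query would.
    static List<Row> topN(List<Row> rows, int n) {
        List<Row> sorted = new ArrayList<>(rows);
        sorted.sort(Comparator.comparingDouble(Row::score).reversed());
        return sorted.subList(0, Math.min(n, sorted.size()));
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
            new Row("a", 0.2), new Row("b", 0.9), new Row("c", 0.5));
        System.out.println(topN(rows, 2)); // b (0.9) first, then c (0.5)
    }
}
```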
score() is also useful for crafting custom optimization algorithms or providing supplemental metrics in a multi-stage recommendation system.
Note: Decisions are not tracked when calling score(). which(), decide(), or optimize() must be used to train models for scoring.
Multivariate optimization is the joint optimization of multiple variables simultaneously. This is often useful for app configuration and performance tuning.
config = configModel.optimize(Map.of(
    "bufferSize", Arrays.asList(1024, 2048, 4096, 8192),
    "videoBitrate", Arrays.asList(256000, 384000, 512000)));
This example decides multiple variables simultaneously. Notice that instead of a single list of variants, a mapping of keys to arrays of variants is provided. This multi-variate mode jointly optimizes all variables for the highest expected reward.
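Joint optimization can be pictured as a search over the cartesian product of all variable values for the combination with the highest expected reward. Below is a toy sketch of that idea; the stand-in reward function is an assumption for illustration (the real SDK scores combinations with a trained model rather than exhaustive enumeration):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class OptimizeSketch {
    // Stand-in expected reward: pretend larger buffers and a moderate
    // bitrate are best. Purely illustrative.
    static double expectedReward(Map<String, Integer> config) {
        return config.get("bufferSize")
            - Math.abs(config.get("videoBitrate") - 384000) / 100.0;
    }

    // Enumerate every combination of variable values and return the best.
    static Map<String, Integer> optimize(Map<String, List<Integer>> space) {
        List<Map<String, Integer>> combos = new ArrayList<>();
        combos.add(new LinkedHashMap<>());
        for (Map.Entry<String, List<Integer>> e : space.entrySet()) {
            List<Map<String, Integer>> next = new ArrayList<>();
            for (Map<String, Integer> c : combos) {
                for (Integer v : e.getValue()) {
                    Map<String, Integer> copy = new LinkedHashMap<>(c);
                    copy.put(e.getKey(), v);
                    next.add(copy);
                }
            }
            combos = next;
        }
        Map<String, Integer> best = combos.get(0);
        for (Map<String, Integer> c : combos) {
            if (expectedReward(c) > expectedReward(best)) best = c;
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, List<Integer>> space = new LinkedHashMap<>();
        space.put("bufferSize", List.of(1024, 2048, 4096, 8192));
        space.put("videoBitrate", List.of(256000, 384000, 512000));
        System.out.println(optimize(space)); // {bufferSize=8192, videoBitrate=384000}
    }
}
```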
optimize() automatically tracks its decision with the Improve AI Gym. Rewards are credited to the most recent decision made by the model, including from a previous app session.
Variants and givens can be any JSON encodable object. This includes Integer, Double, Boolean, String, Map, List, and null. Nested values within collections are automatically encoded as machine learning features to assist in the decision making process.
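The feature encoding of nested values can be pictured as path-based flattening: each leaf value is keyed by the path of map keys and list indices that reach it. This sketch illustrates the idea only and is not the SDK's actual encoder:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FlattenSketch {
    // Recursively flatten nested maps/lists into path -> leaf-value features.
    static void flatten(String path, Object value, Map<String, Object> out) {
        if (value instanceof Map<?, ?> m) {
            for (Map.Entry<?, ?> e : m.entrySet()) {
                String key = e.getKey().toString();
                flatten(path.isEmpty() ? key : path + "." + key, e.getValue(), out);
            }
        } else if (value instanceof List<?> l) {
            for (int i = 0; i < l.size(); i++) {
                flatten(path + "[" + i + "]", l.get(i), out);
            }
        } else {
            out.put(path, value); // leaf: number, string, boolean, or null
        }
    }

    public static void main(String[] args) {
        Map<String, Object> theme = Map.of("font", "Helvetica", "size", 12);
        Map<String, Object> features = new LinkedHashMap<>();
        flatten("", theme, features);
        System.out.println(features); // e.g. {font=Helvetica, size=12}
    }
}
```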
The following are all valid:
greeting = greetingsModel.which("Hello", "Howdy", "Hola");
discount = discountModel.which(0.1, 0.2, 0.3);
enabled = featureFlagModel.which(true, false);
item = filterModel.which(item, null);
themes = Arrays.asList(
    Map.of("font", "Helvetica", "size", 12, "color", "#000000"),
    Map.of("font", "Comic Sans", "size", 16, "color", "#F0F0F0"));
theme = themeModel.which(themes);
It is strongly recommended never to include Personally Identifiable Information (PII) in variants or givens, so that it is never tracked, persisted, or used as training data.
The mission of Improve AI is to make our corner of the world a little bit better each day. When each of us improve our corner of the world, the whole world becomes better. If your product or work does not make the world better, do not use Improve AI. Otherwise, welcome, I hope you find value in my labor of love.
– Justin Chapweske