Custom AIKP development flow
Adapting Boilerplate for Your Own AIKP
Inside of sandbox
1. Copy the boilerplate directory to your user directory (/home/YOUR_USERNAME/OCLI/AIKP/…) and rename it to your AIKP name.
2. Determine which parameters you want the end user to be able to configure from the CLI.
3. Add those configurable parameters to the TASK_DEFAULTS dictionary in config.py.
4. If necessary, add checks for those configurable parameters in the validate_task class method in __init__.py.
5. If necessary, define the behaviour of configurable parameters in the task_set class method in __init__.py.
6. Define your workflow as the list of commands the end user will run from the CLI, in cli.py.
7. Mount the commands created in cli.py in the cli_task_mount class method in __init__.py.
On a local machine
1. Clone the ocli repository and set up a local environment.
2. Copy the module ocli.aikp.boilerplate and rename it to your AIKP name (e.g. ocli.aikp.NEW_NAME).
3. Determine which parameters you want the end user to be able to configure from the CLI.
4. Add those configurable parameters to the TASK_DEFAULTS dictionary in config.py.
5. If necessary, add checks for those configurable parameters in the validate_task class method in __init__.py.
6. If necessary, define the behaviour of configurable parameters in the task_set class method in __init__.py.
7. Define your workflow as the list of commands the end user will run from the CLI, in cli.py.
8. Mount the commands created in cli.py in the cli_task_mount class method in __init__.py.
AIKP Anatomy
File structure
ocli.aikp.boilerplate/
├ __init__.py # Marks the AIKP directory as a Python package
| # and defines the `Template` class inherited from TemplateBoilerplate.
├ cli.py # Define all custom AIKP specific CLI commands here
| # For example `task template command_1`
| #
├ config.py # AIKP-specific configuration defaults for tasks and recipes.
| # This includes default paths and parameters used across the AIKP.
├ recipe_schema.json # JSON Schema for recipe validation.
└ template.py # Task Template class (TemplateBoilerplate), a.k.a. the task controller,
  # which implements common task operations: validate task,
  # create task, upgrade task, update recipe, AI path resolution, etc.
Configuring task properties (config.py)
TASK_DEFAULTS – a dictionary of configuration keys and their default values for your task. These are the parameters you want to be configurable via CLI. When a new task is created from your template, it will be initialized with all keys in this dictionary.
For example, in the default config.py, TASK_DEFAULTS includes:
'ai_results': '/optoss/out',
'friendly_name': "{project}/{name}/{roi_name}/",
'cos_key': "{project}/{name}/{roi_name}/",
'kind': "..."
Here, ai_results is the base directory where outputs will be stored (by default /optoss/out – you might change this if needed). friendly_name and cos_key are template strings that will be filled in with the project, task name, and ROI name, defining how the output will be identified in the UI and in cloud storage (the cos_key is often used for the object storage path, and friendly_name is what end-users see in the UI). kind is an example metadata field used by the GUI (for instance, to classify the type of analysis).
You can add any other keys here that your AIKP needs – for example, a path to an input dataset, numeric parameters like domain_min/domain_max, flags to toggle certain processing steps, etc.
Every key you put in TASK_DEFAULTS will be visible in the task’s configuration (shown with task show CLI command) and can be set by the user via task set <key>=<value>.
These defaults also propagate into the recipe (see update_recipe below).
If a default value contains placeholders like {project} or {name}, the system will typically substitute them with the actual project/task values when using them.
For instance, the default friendly_name "Project/Task/ROI/" becomes something like "myProject/myTask/myROI/".
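For illustration, a config.py for a hypothetical AIKP might extend the defaults roughly like this. The keys custom_input, domain_min and domain_max are examples added for this sketch; they are not part of the boilerplate defaults.
# config.py -- illustrative sketch; keys beyond the boilerplate defaults are hypothetical
TASK_DEFAULTS = {
    'ai_results': '/optoss/out',                      # base output directory
    'friendly_name': "{project}/{name}/{roi_name}/",  # label shown in the UI
    'cos_key': "{project}/{name}/{roi_name}/",        # object-storage path template
    'kind': "...",                                    # metadata field used by the GUI
    # AIKP-specific additions (examples):
    'custom_input': '',        # path to an input dataset, set by the user via `task set`
    'domain_min': None,        # lower bound of the data domain
    'domain_max': None,        # upper bound of the data domain
}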
RECIPE_DEFAULTS – a dictionary of default entries for the recipe JSON that your template will produce. The recipe is essentially a configuration for the actual processing job (it might include algorithm settings, references to input data, output definitions, etc.). In the default RECIPE_DEFAULTS, you’ll see something like:
"version": 1.0,
"type": "..."
These defaults will be included whenever a new recipe is made.
You should adjust or extend this dictionary to include any constant fields that your recipe requires by default. For instance, if your recipe JSON needs a certain structure or fields (like an empty list of results, or specific algorithm parameters with initial values), you can put those here. When the user runs task make recipe, the framework will combine these RECIPE_DEFAULTS with dynamic content from the task (via update_recipe) to generate the final recipe JSON. By having sensible defaults, you ensure the recipe starts from a valid state.
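To extend these, you might write something like the following in config.py. This is a sketch only; the results and algorithm entries are hypothetical examples, not boilerplate fields.
# config.py -- illustrative sketch of recipe defaults; extra fields are hypothetical
RECIPE_DEFAULTS = {
    "version": 1.0,
    "type": "...",                 # recipe type identifier expected by the platform
    # Constant fields your recipe needs from the start (examples):
    "results": [],                 # empty list of result entries, filled by custom commands
    "algorithm": {
        "color_map": "viridis",    # initial algorithm parameter
    },
}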
Template / Controller (template.py)
This file defines the task Template class (subclassing OCLI’s TaskTemplate) for your AIKP that reads recipe and task defaults as dictionaries from config.py.
The class sets template_recipe_conf = RECIPE_DEFAULTS and template_task_conf = TASK_DEFAULTS, so that these defaults are automatically applied when creating tasks and recipes for this template.
List of Class Methods that can be copy-pasted and adjusted/extended when needed:
update_recipe: Called each time a recipe is created (e.g. when running task make recipe). This method updates entries in the recipe dictionary based on the task's configuration. In the boilerplate, it calls the base implementation and then, for example, appends a custom tag ('tag_of_choice') to recipe['tag']. You can use this method to propagate task parameters (like domain thresholds, selected options, etc.) into the recipe. For instance, you might set recipe['domain_min'] = task.config.get('domain_min') so that the recipe JSON includes the latest domain minimum from the task config. The boilerplate shows commented examples of doing this for domain_min, domain_max, color_map, etc. It also calls update_visible_metadata(...) to update metadata fields (like ensuring the recipe records the template name and project for visibility in the UI).
create_task: Invoked when a new task is created (e.g. via task create). This allows adding or computing any dynamic entries in the task's configuration at creation time. By default it just calls the superclass to handle standard initialization.
upgrade_task: Used to upgrade or migrate an existing task's config when the template changes. By default it calls the base class's upgrade_task and returns its result, a boolean indicating whether any upgrade was performed.
validate_task: Called for each task parameter (each key in task.config) to check for errors or missing values. This method should return a tuple (True, errors) if validation is performed on that key, or (False, []) if the key is not handled (which means "no validation for this key"). The errors value is a list of error message strings; an empty list means the value is valid. For example, in the default code, if the key is 'ai_results', it checks that the path is set, exists as a directory, and is writable; if not, it appends errors like "Required by Template" (if empty), "Not found" (if the directory doesn't exist), or "Not writable". If the key is not one of the template's defined keys, the method returns (False, []) to skip validation. OCLI calls validate_task on each configurable key (especially those in TASK_DEFAULTS) whenever the task is saved or updated. A return value of True enables validation for that key, and any errors in the list will be reported to the user. In other words, True with an empty error list means "valid", and any strings in the error list indicate problems with that parameter. You can customize this method to add checks for any new task parameters you introduce (e.g. range checks or ensuring a file path exists); a sketch of such an override follows this list.
validate_schema: Called when a recipe is being validated (for example, after task make recipe). This ensures the generated recipe JSON complies with the schema. The method merges the template's specific schema requirements into a base schema "envelope" using deep_update, then uses Draft7Validator to check the recipe against it. If the recipe doesn't match the schema, validate_schema will report errors indicating which part of the recipe is invalid. By default, it loads recipe_schema.json for your AIKP (identified by your module path) and validates the recipe against it. You can extend this to enforce additional constraints beyond the JSON schema if needed.
get_assembler: Expected to return an assembler object associated with this template, instantiated from assemble_recipe.py (which can be created for data assembly). In the custom boilerplate, which does not use an assembler object, it simply raises an exception indicating assembly is not implemented. As a developer, you should override this to return an instance of your AIKP-specific assembler if your AIKP needs one. The assembler is stored in assemble_recipe.py and defines how to assemble input stacks into the final data product (by implementing an assemble_kernel). Input to the assembler is either stacks (made by the internal satellite-specific Processor) or user-defined input data.
task_set: Called whenever task parameters are updated via the CLI (task set key=value). This hook allows you to perform template-specific actions when certain keys are set. By default, it delegates to the base implementation after any custom handling. For example, the boilerplate (in comments) shows how to handle a color_map key: if the user sets color_map, the code can standardize the value (treat "none" or "false" as None) and validate that the provided color map is known, then store it in task.config. After such custom logic, it calls super().task_set(...) to handle any remaining keys normally. Developers can add similar blocks for any custom keys that need special processing when set.
get_stack_path: Creates a task-specific "stack" path in the user's directory, typically used to determine where to store or find the input data stack for the task. By default, it generates a slugified folder name from the project and task name. If full=False, it returns just that slug name; if full=True, it returns the absolute path by joining the slug with the base ai_results directory. In practice, if your task is in project "XYZ" with name "MyTask", get_stack_path(full=True) might return something like /optoss/out/xyz_mytask (since by default ai_results is /optoss/out). This path can be used to store intermediate stack data. Usually you won't need to override this, but you can customize it if your stack files need a different directory structure.
get_ai_results_path: Determines the path of the AI results directory specific to the task. It first validates the base ai_results directory (using task.validate_all(['ai_results']) to ensure it exists and is writable). Then it calls get_stack_path to get the task's slug name and similarly returns the absolute path (if full=True) or just the slug (if full=False) within the ai_results base. Essentially, both get_stack_path and get_ai_results_path construct paths under the ai_results folder, but get_ai_results_path ensures the base directory is valid before returning the path, which helps catch configuration errors early. The default base path is /optoss/out (set in config.py's TASK_DEFAULTS), but a developer or user could change task.config['ai_results'] to a different location if needed; this method will verify it.
cli_task_mount: Where commands defined in cli.py are mounted and become available as CLI subcommands for this task template. This method is automatically invoked by the OCLI framework to add your custom commands into the CLI.
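As an illustration, a custom Template might override update_recipe, validate_task and task_set roughly as sketched below. The method signatures are schematic assumptions (copy the exact ones from the boilerplate __init__.py), and custom_input, domain_min and domain_max are hypothetical keys.
# __init__.py -- schematic sketch only; take exact signatures and imports from the boilerplate
import os

from ocli.aikp.boilerplate.template import TemplateBoilerplate  # path inferred from the file structure above

from .config import RECIPE_DEFAULTS, TASK_DEFAULTS


class Template(TemplateBoilerplate):
    template_recipe_conf = RECIPE_DEFAULTS
    template_task_conf = TASK_DEFAULTS

    @classmethod
    def update_recipe(cls, task, recipe, *args, **kwargs):
        # Let the base class fill in the standard recipe content first.
        recipe = super().update_recipe(task, recipe, *args, **kwargs)
        # Propagate task parameters into the recipe (hypothetical keys).
        recipe['domain_min'] = task.config.get('domain_min')
        recipe['domain_max'] = task.config.get('domain_max')
        return recipe

    @classmethod
    def validate_task(cls, task, key, *args, **kwargs):
        errors = []
        if key == 'custom_input':                    # hypothetical key
            value = task.config.get(key)
            if not value:
                errors.append('Required by Template')
            elif not os.path.isdir(value):
                errors.append('Not found')
            return True, errors                      # validation handled for this key
        if key in ('domain_min', 'domain_max'):      # hypothetical keys
            value = task.config.get(key)
            if value is not None and not isinstance(value, (int, float)):
                errors.append('Must be a number')
            return True, errors
        # Unhandled keys fall through to the base implementation
        # (which may itself return (False, []) to skip validation).
        return super().validate_task(task, key, *args, **kwargs)

    @classmethod
    def task_set(cls, task, key, value, *args, **kwargs):
        if key == 'color_map' and str(value).lower() in ('none', 'false'):
            value = None                             # normalize "none"/"false" to None
        return super().task_set(task, key, value, *args, **kwargs)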
How it works: in the default implementation, cli_task_mount imports each command function from your module’s cli.py (for example, command1, command2, etc.) and registers them under the CLI group for this template. The log message "cli_task_mount mount invoked" is printed to debug log when this happens. Once mounted, you can run those commands using the syntax task template <command> in the OCLI shell.
Here, “template” is a placeholder – in practice, it will use your template’s name or identifier (from class Template(TaskTemplate)). For example, if your AIKP module is ocli.aikp.MyAIKP, you might rename class Template(TaskMyAIKP) and run task MyAIKP command1, depending on how the CLI is structured.
Typically the template’s class or module name is used to group the commands.
This mechanism links your custom logic into the CLI: any function you decorate as a Click command in cli.py and add via cli_task_mount will be accessible to the end-user as part of the task’s workflow. Make sure to list all your command functions here so they become active in the CLI.
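One plausible shape of cli_task_mount, assuming the framework hands it a Click group to attach commands to (the parameter name and signature are assumptions; follow the boilerplate's version and its imports):
# __init__.py -- schematic sketch continuing the Template class above
import logging

log = logging.getLogger(__name__)


class Template(TemplateBoilerplate):
    # ... defaults and other overrides as shown earlier ...

    @classmethod
    def cli_task_mount(cls, group, *args, **kwargs):
        """Register the custom commands from cli.py under this template's CLI group."""
        log.debug("cli_task_mount mount invoked")
        from .cli import command1, command2    # command names are examples
        group.add_command(command1)            # add_command is the standard Click group API
        group.add_command(command2)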
Implementing and Exposing Custom CLI Commands (cli.py)
In this file, you define any number of CLI commands that form your AIKP’s workflow.
Each command is typically a function decorated with @click.command(...) to specify its name, and optionally @click.option(...) to add command-line options/flags.
For example, you might have:
import click
# pass_task, ensure_task_resolved and Task are OCLI helpers; import them as the boilerplate cli.py does.

@click.command('process-data')
@click.option('--threshold', default=0.5, help='Threshold value used by the processing step')
@pass_task
@ensure_task_resolved
def process_data(task: Task, threshold):
    # your processing code here, e.g. read inputs, apply the threshold, write results
    ...
In this example, running task template process-data --threshold 0.5 in the CLI would execute the process_data function for the active task.
The decorators like @pass_task, @ensure_task_resolved, and @pass_repo (as seen in the boilerplate) automatically inject the current Task object, ensure the task and ROI are loaded/resolved, and provide access to the repository context (Repo) into your command function. This means within your command function you can directly use task (and, if needed, repo or others) to access configuration and data.
For instance, the example command1 in the boilerplate uses Recipe(resolve_recipe(task)) to load the task’s recipe and then prepares an output path (filenames.pred8c + '.tiff') for saving results.
You can similarly fetch task parameters via task.config.get('<param_name>') (e.g., custom_input_path) to use them in your algorithms.
When writing these commands, implement whatever analysis or calculation is needed (e.g., reading input files, performing computations, saving outputs).
It’s recommended to save any result files into the task’s AI_RESULTS directory (which you can get via task.config['ai_results']),
so that subsequent steps (like creating COGs or GeoJSONs) can find them.
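For example, a command might resolve its output location from the task configuration before writing results. This is a minimal sketch: the command name export-result and the file name are hypothetical, and in practice you would typically write into the task-specific subfolder (e.g. the one returned by get_ai_results_path) rather than the base directory.
import os

import click
# pass_task and ensure_task_resolved come from OCLI, as in the boilerplate cli.py.

@click.command('export-result')                      # hypothetical command name
@pass_task
@ensure_task_resolved
def export_result(task):
    # Resolve the base output directory from the task configuration.
    out_dir = task.config['ai_results']
    os.makedirs(out_dir, exist_ok=True)
    out_file = os.path.join(out_dir, 'result.tiff')  # illustrative file name
    # ... compute and write the result to out_file ...
    click.echo(f'Result written to {out_file}')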
After defining your commands in cli.py, ensure you import and add them in the cli_task_mount class method of __init__.py
so that they become available in the CLI. You can provide as many or as few commands as makes sense for your workflow,
and design them with whatever options (--flags) you want the user to control.
Recipe validation schema (recipe_schema.json)
recipe_schema.json is used to validate the recipe JSON file for correctness.
This JSON Schema defines what a valid recipe looks like for your AIKP. It typically specifies required fields, data types, allowed ranges, etc., for the content of the recipe.
OCLI will use this file in the validate_schema class method to ensure that when a recipe is created or loaded, it conforms to expectations (validate_schema class method merges your template’s schema with a base schema envelope and then runs validation).
For example, the base might define general fields like ROI geometry, while your template’s schema adds requirements for your specific fields.
As a developer, you could update recipe_schema.json to include any new recipe properties your AIKP uses. Maintaining an accurate schema helps catch errors early and guides users to provide correct parameters.
For instance, if you add domain_min and domain_max to the recipe, you might extend the schema with those keys (e.g., ensuring they are numbers).
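A standalone sketch of that idea, using jsonschema's Draft7Validator (the same validator class validate_schema relies on); the schema fragment here is a hypothetical example, not the boilerplate schema:
from jsonschema import Draft7Validator

# Hypothetical fragment you might merge into recipe_schema.json for your AIKP.
schema_fragment = {
    "type": "object",
    "required": ["domain_min", "domain_max"],
    "properties": {
        "domain_min": {"type": "number"},
        "domain_max": {"type": "number"},
    },
}

recipe = {"domain_min": 0.0, "domain_max": 1.5}

# Report every violation instead of stopping at the first one.
for error in Draft7Validator(schema_fragment).iter_errors(recipe):
    print(error.message)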
Commands that would be run in the CLI as a workflow
task create --template ocli.aikp.AIKP_name --roi roi_name --name task_name --activate
This creates a new task instance using your custom template. The --template ocli.aikp.AIKP_name specifies which AIKP to use (replace AIKP_name with your module’s name). --roi <roi_name> links the task to a region of interest, and --name <task_name> gives the task a unique name. --activate makes this task the active one, so subsequent task commands will apply to it by default. Internally, this command sets up a new Task with all default settings from your config.py (TASK_DEFAULTS) and prepares it for configuration.
Setting task parameters
task set friendly_name="tree/structure/visible/in/UI"
task set custom_input=/home/user_name/OCLI/folder/folder/dataset
task set key=value
The above task set commands allow you to adjust the task’s parameters. In this example, friendly_name is being set to a value that will be visible in the UI as a label or path, and custom_input is set to a local dataset path. The third command is a generic placeholder showing that you can set any key from TASK_DEFAULTS by providing a value.
Each time you run task set, the specified key-value is saved into the task’s config.
Under the hood, the task_set method in your Template class is invoked – this is where any special handling for certain keys would occur. After setting these, you can use task show to verify that the task config now contains these values. Also note that any values set here (friendly_name, custom_input, etc.) can be used later when building the recipe or running commands.
task make recipe
This generates the recipe JSON for the task, combining the default structure from RECIPE_DEFAULTS with all the current task parameters. When you run this, the system will call your Template’s update_recipe method to inject task-specific values into the recipe. The resulting recipe is usually saved in the project’s recipe folder (and task show will typically show the path to the recipe). After this step, the recipe is ready to be executed or further refined with custom commands.
Custom workflow
The custom workflow is defined by the commands and algorithms added in cli.py. This is where you add your analysis/algorithm calculations, in as many or as few commands as needed (with as many or as few extra options as you want the user to be able to use).
All outputs or results generated by these commands should be saved to files (or task config) in the AI_RESULTS/<task_slug> directory so that they persist and can be used in subsequent steps.
For example, after the recipe is created, you might run one or more custom processing commands to produce intermediate or final results:
task template command1
task template command2 --option1
For instance, if you wrote a command @click.command('process-data') in cli.py, you would run task template process-data. Each custom command may perform a part of the workflow – e.g., downloading data, running a model, computing statistics – and you can chain them as needed.
It is possible that the output of some commands is itself a task parameter, in which case the recipe needs to be re-written with this new data.
In the simplest example, when the input is a single-band raster, one of the custom commands determines the domain minimum and maximum. After the command runs, the determined values populate the task parameters and can be (a) accepted by the user or (b) overwritten. Keep in mind that domain minimum and maximum are also present in TASK_DEFAULTS in config.py.
Since the task parameters have changed after the recipe was created in the previous steps, the existing recipe must be overridden with the new version.
For example:
task template set-domain
This might be a custom command that calculates domain min/max. When run, it prints the suggested domain_min and domain_max to the screen and also updates the task's config with those values (so task show will now list the domain_min and domain_max values); a sketch of such a command is shown below, after the recipe override step.
task set domain_min=123 domain_max=321
Optional: If the user wants to override the automatically calculated domain limits, they can manually set new values like this.
task make recipe --override
After updating those parameters, this command regenerates the recipe, overriding the previous recipe content. The --override flag tells the system to not create a new recipe file but to update the existing one for this task. This will call update_recipe again, now including the new domain_min and domain_max values from the task config. The resulting recipe JSON now reflects the updated parameters (ensuring that subsequent processing or publishing uses the correct values).
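The set-domain command mentioned above could look roughly like this. The sketch assumes rasterio as the raster reader and reuses the hypothetical custom_input, domain_min and domain_max keys; none of this is part of the boilerplate.
import click
import rasterio
# pass_task and ensure_task_resolved are OCLI helpers, imported as in the boilerplate cli.py.

@click.command('set-domain')                       # hypothetical command
@pass_task
@ensure_task_resolved
def set_domain(task):
    """Compute domain_min/domain_max from the input raster and store them in the task config."""
    src_path = task.config.get('custom_input')     # hypothetical input-path key
    with rasterio.open(src_path) as src:
        band = src.read(1, masked=True)            # first band, nodata masked out
    domain_min, domain_max = float(band.min()), float(band.max())
    click.echo(f'domain_min={domain_min} domain_max={domain_max}')
    # Suggested values go into the task config so that `task show` lists them and
    # `task make recipe --override` picks them up; the user can still overwrite them via `task set`.
    # Depending on the framework, persisting the config may require an explicit save call,
    # as done in the boilerplate.
    task.config['domain_min'] = domain_min
    task.config['domain_max'] = domain_max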
Creation of output
Creation of a lightweight COG and associated GeoJSON metadata from the output file calculated and saved to the AI_RESULTS directory in the previous steps (the export tools are defined in ocli/ai/export_tools.py).
ai makecog full
This command creates a Cloud Optimized GeoTIFF (COG) from your output data. Essentially, it will take the result (for example, the TIFF or image produced by your custom commands in AI_RESULTS) and convert it into a COG format for efficient cloud access and tiling.
ai makegeojson --cos-key="+_details" --friendly-name="+details"
This generates a GeoJSON file containing metadata for your task’s output.
ai upload
Uploads the generated content (the COG and GeoJSON, and the recipe) to the platform’s storage. This step takes the files from AI_RESULTS and pushes them to the remote cloud storage or database. After a successful upload, the data is stored in the cloud (e.g., in an object storage bucket) under the paths defined by your cos_key (which by default was derived from project/name/roi).
ai publish post
Publishes the task’s results to the UI. This final step typically makes the new data visible and accessible in the platform’s web interface.