Quick Start Guide

Get GeoLift running quickly through the installed geolift CLI.

Installation

Option A: Install from a built artifact

Use this path if you are consuming a local release candidate or a maintainer-provided build outside the repo tree.

python -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install /path/to/geolift-1.5.0-py3-none-any.whl
# or:
# pip install /path/to/geolift-1.5.0.tar.gz

Built artifacts install the package and the geolift CLI. They do not include the repo’s data-config/, recipes/, or shapemap/ directories, so a packaged install must point --config at your own YAML config files and input data.
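For orientation only, here is a hedged sketch of what such a user-supplied config file might look like. Every key name below is purely illustrative, not the real GeoLift schema; consult the shipped example configs (or the project’s config documentation) for the actual field names.

```yaml
# HYPOTHETICAL sketch -- key names are illustrative, not the real schema.
data:
  input_path: /path/to/your/geo_timeseries.csv
output:
  directory: outputs/
plots:
  create: true
```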

Option B: Install from a source checkout

Use this path if you want the shipped example configs under data-config/ or you are developing locally.

git clone https://github.com/your-org/geolift.git
cd geolift

python -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
pip install -e .

Optional GPU acceleration for the power stage:

pip install cupy-cuda12x
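If you are unsure whether the GPU extra is usable in your environment, a quick check (plain Python, independent of GeoLift) is to test whether cupy is importable before passing --use-gpu:

```python
import importlib.util

# Check whether CuPy is importable; if it is not, run the power stage on CPU.
has_cupy = importlib.util.find_spec("cupy") is not None
print("CuPy available:", has_cupy)
```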

Your First Analysis

Step 1: Choose your config source

Packaged-install example:

geolift infer --config /path/to/geolift_analysis_config.yaml --create-plots

Source-checkout example with the shipped demo configs:

A source checkout includes canonical example configs under data-config/:

  • power_analysis_config.yaml
  • donor_eval_config.yaml
  • geolift_analysis_config.yaml

For the full pipeline, point geolift pipeline at any one canonical YAML file from that directory. The recommended anchor is the inference config:

geolift pipeline --config data-config/geolift_analysis_config.yaml

Step 2: Inspect Outputs

The pipeline writes:

  • outputs/multicell_power_analysis/
  • outputs/multicell_donor_eval/
  • outputs/multicell_geolift_analysis/
  • outputs/geolift_pipeline_report.md
  • outputs/geolift_pipeline_report.html

If you override the pipeline root:

geolift pipeline \
  --config data-config/geolift_analysis_config.yaml \
  --output-dir results/demo_run

GeoLift preserves the same legacy stage/report names under results/demo_run/.
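As a sketch of that layout, the expected locations under a custom root can be built with plain pathlib; the directory and report names below mirror the default outputs/ layout listed earlier, and nothing here calls GeoLift itself:

```python
from pathlib import Path

# Build the expected stage and report paths under a custom pipeline root.
root = Path("results/demo_run")
stage_dirs = [
    root / "multicell_power_analysis",
    root / "multicell_donor_eval",
    root / "multicell_geolift_analysis",
]
reports = [
    root / "geolift_pipeline_report.md",
    root / "geolift_pipeline_report.html",
]
for path in stage_dirs + reports:
    print(path.as_posix())
```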

Step 3: Run Individual Stages When Needed

geolift power --config data-config/power_analysis_config.yaml --use-gpu --jobs -1
geolift donors --config data-config/donor_eval_config.yaml --jobs -1
geolift infer --config data-config/geolift_analysis_config.yaml --create-plots

Useful pipeline selectors:

geolift pipeline --config data-config/geolift_analysis_config.yaml --only-inference
geolift pipeline --config data-config/geolift_analysis_config.yaml --skip-power
geolift pipeline --config data-config/geolift_analysis_config.yaml --no-report

Command Notes

  • --config is required on every command.
  • On pipeline, --config anchors the config directory through one canonical YAML path; there is no separate pipeline-level schema.
  • --jobs is meaningful for power and donor evaluation.
  • --use-gpu affects the power stage only.
  • --data, --create-plots, and --no-create-plots are infer-only flags.

Compatibility Note

python runme.py still works as a compatibility wrapper. The preferred path is:

geolift pipeline --config data-config/geolift_analysis_config.yaml

Legacy recipe scripts remain usable for migration, but they are not the primary quick-start path anymore.

Next Steps