Parslet Recipes: Real-World Examples
The best way to learn Parslet is by seeing it in action. This guide breaks down the example workflows found in the examples/
directory. Each example is a self-contained “recipe” designed to showcase a specific feature or use case.
1. The “Hello World” of Parslet
File: `examples/hello.py`
Use Case: This is the quintessential introductory example, perfect for verifying your installation and understanding the absolute basics of Parslet. It demonstrates how to define tasks and chain them together.
How It Works: The workflow performs a series of simple arithmetic operations.
- Two `add` tasks are defined to run in parallel.
- Two `square` tasks each take the result (the `ParsletFuture`) of one of the `add` tasks.
- A final `sum_results` task takes the outputs of both `square` tasks and adds them together.
This creates a small, diamond-shaped dependency graph.
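The diamond shape above can be sketched in a few lines. The values below are illustrative (chosen so the final result is 100, i.e. 6² + 8²), and a no-op stand-in replaces the `@parslet_task` decorator so the sketch runs without Parslet installed; the real `examples/hello.py` may differ:

```python
# Stand-in for Parslet's @parslet_task decorator (sketch only); the real
# decorator defers execution and returns a ParsletFuture instead of a value.
def parslet_task(fn):
    return fn

@parslet_task
def add(a, b):
    return a + b

@parslet_task
def square(x):
    return x * x

@parslet_task
def sum_results(a, b):
    return a + b

# Diamond-shaped graph: two adds fan out to two squares,
# which fan back in to a single sum. 6**2 + 8**2 == 100.
left = add(2, 4)     # -> 6
right = add(3, 5)    # -> 8
result = sum_results(square(left), square(right))
print(result)        # -> 100
```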
How to Run It:
parslet run examples/hello.py
Expected Output: The final result of the calculation, `100`, will be printed to your console.
2. The Offline Photo Filter
File: `examples/image_filter.py`
Use Case: Demonstrates a simple media processing pipeline that can run entirely offline on a device like a phone or Raspberry Pi. This is useful for pre-processing images before uploading them or for batch-editing tasks in areas with no connectivity.
How It Works: This recipe uses the `Pillow` library to perform a series of image manipulations.
- `load_image`: Loads an image file from disk.
- `convert_grayscale`: Converts the image to black and white.
- `apply_blur`: Applies a Gaussian blur effect.
- `save_image`: Saves the final processed image back to disk.
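Each of the four steps maps onto one or two Pillow calls. A minimal sketch of the task bodies (function names mirror the list above; the `@parslet_task` decorator is omitted for brevity, and the actual example file may differ):

```python
from PIL import Image, ImageFilter  # pip install Pillow

def load_image(path):
    return Image.open(path)

def convert_grayscale(img):
    # "L" is Pillow's single-channel (8-bit grayscale) mode.
    return img.convert("L")

def apply_blur(img, radius=2):
    return img.filter(ImageFilter.GaussianBlur(radius))

def save_image(img, path):
    img.save(path)
    return path
```

Chained together, `save_image(apply_blur(convert_grayscale(load_image("input.png"))), "output.png")` reproduces the straight-line pipeline.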
How to Run It:
First, you need an input image. Create a dummy PNG file named `input.png` in the project’s root directory; any small PNG image will work. Then run the workflow from the command line:
parslet run examples/image_filter.py
To See the Workflow: You can also ask Parslet to export a visual diagram of the pipeline:
parslet run examples/image_filter.py --export-png pipeline.png
This will create a file named `pipeline.png` showing the four steps connected in a line.
3. The Offline Text Cleaner
File: `examples/text_cleaner.py`
Use Case: A classic data cleaning workflow that takes a messy text file, normalizes it, and extracts useful information. This is a common first step in many natural language processing (NLP) or data analysis tasks, and Parslet allows you to do it on-device.
How It Works:
- `load_text_file`: Reads the content of a text file.
- `convert_to_lowercase`: Normalizes all text to lowercase.
- `remove_punctuation`: Strips out all punctuation marks.
- `count_words`: Calculates the frequency of each word.
- `save_word_counts`: Saves the resulting word counts as a JSON file.
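The middle three steps need nothing beyond the standard library. A sketch of the core cleaning logic (collapsed into one helper for brevity; the example file splits these into separate tasks):

```python
import json
import string
from collections import Counter

def clean_and_count(text: str) -> dict:
    # Lowercase, strip all punctuation, then count word frequencies.
    lowered = text.lower()
    stripped = lowered.translate(str.maketrans("", "", string.punctuation))
    return dict(Counter(stripped.split()))

counts = clean_and_count("Hello Parslet! This is a test, a simple test.")
# "a" and "test" each appear twice; every other word once.
print(json.dumps(counts, indent=2))
```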
How to Run It:
Create a file named `input.txt` in the project’s root directory and add some sample text, for example:
Hello Parslet! This is a test, a simple test.
Run the workflow:
parslet run examples/text_cleaner.py
Expected Output: A file named `word_counts.txt` will be created with the JSON results, and a diagram named `text_cleaner_dag.png` will be generated.
4. The Video Frame Extractor
File: `examples/video_frames.py`
Use Case: A tool for video analysis that extracts every individual frame from a video file and saves them as images. This is a foundational step for tasks like object detection in video, motion analysis, or creating GIFs.
How It Works: This workflow uses the `OpenCV` library.
- `extract_frames`: Opens a video file, reads it frame by frame, and saves each frame as a separate PNG file in a `frames/` directory.
- `count_frames`: Counts the total number of frames that were extracted.
How to Run It:
You need a video file. For testing, the `run_all_examples.py` utility script can generate a sample video for you, or you can supply your own `video.mp4`. Then run the workflow, pointing it to your video file:
# Make sure you have a video.mp4 file in your root directory
parslet run examples/video_frames.py --video video.mp4
Dependencies: This example requires `opencv-python` and `numpy`, which you can install with `pip install opencv-python numpy`.
5. The RAD-Parslet Medical Imaging Pipeline
Files: `examples/rad_parslet/`, `examples/rad_pipeline.py`
Use Case: This showcases the “Radiology AI Diagnostics (RAD) by Parslet” concept. It simulates a real-world medical imaging workflow for a remote clinic where a technician can get a preliminary analysis of a scan on a local, offline device.
How It Works:
- The `rad_parslet.py` module contains the core logic, including two “tiny” mock AI models (`TinyMeanModel`, `TinyStdModel`) that produce a diagnosis based on simple image statistics.
- The `analyze` function runs both models on an image and compares their outputs. If they agree with high confidence, a diagnosis is made. If not, the image is flagged for human review.
- The `rad_pipeline.py` script wraps this `analyze` function in a `@parslet_task` so it can be run by the Parslet engine.
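The two-model agreement check can be sketched in plain Python. The model functions, thresholds, and return shape below are hypothetical stand-ins for `TinyMeanModel` / `TinyStdModel`, chosen only to illustrate the agree-or-flag pattern:

```python
from statistics import mean, stdev

# Hypothetical stand-ins for the two "tiny" mock models: each maps a simple
# image statistic to a (label, confidence) pair.
def tiny_mean_model(pixels):
    return ("abnormal", 0.9) if mean(pixels) > 128 else ("normal", 0.9)

def tiny_std_model(pixels):
    return ("abnormal", 0.8) if stdev(pixels) > 60 else ("normal", 0.8)

def analyze(pixels, threshold=0.75):
    # If both models agree with high confidence, emit a diagnosis;
    # otherwise flag the scan for human review.
    label_a, conf_a = tiny_mean_model(pixels)
    label_b, conf_b = tiny_std_model(pixels)
    if label_a == label_b and min(conf_a, conf_b) >= threshold:
        return {"diagnosis": label_a, "review": False}
    return {"diagnosis": "REVIEW_REQUIRED", "review": True}
```

The fallback to human review on disagreement is what makes the pattern safe for a preliminary, on-device screen: the models never have to be right, only honest about uncertainty.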
How to Run It:
# This runs the simplified pipeline wrapper
parslet run examples/rad_pipeline.py
Expected Output: A new directory `rad_pipeline_results/` will be created containing:
- `diagnosis.txt`: The final diagnosis (“normal”, “abnormal”, or “REVIEW_REQUIRED”).
- `review_flag.txt`: `true` or `false`.
- `meta.json`: A file with metadata about the model versions and their agreement score.
6. Edge MCU Sensor Data Processor
File: `examples/edge_mcu_sensor_processing.py`
Use Case: Simulates a workflow for an Internet of Things (IoT) scenario. Data is collected from a low-power sensor (like a microcontroller unit), cleaned, and analyzed for anomalies. This is typical for applications in agriculture, environmental monitoring, or industrial automation.
How It Works:
- `generate_sensor_data`: Creates a list of fake sensor readings with random noise and occasional spikes.
- `smooth_data`: Applies a simple moving average to clean up the noise.
- `detect_anomalies`: Identifies points where the data changes too abruptly.
- `save_results`: Saves the cleaned data and a list of anomalies to a timestamped directory.
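The smoothing and anomaly-detection steps can be sketched in a few lines of plain Python. The signatures, window size, and jump threshold are illustrative, not the example file's actual values:

```python
def smooth_data(readings, window=3):
    # Trailing moving average: each point is averaged with up to
    # (window - 1) of its predecessors.
    out = []
    for i in range(len(readings)):
        chunk = readings[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def detect_anomalies(readings, max_jump=5.0):
    # Flag indices where consecutive readings change too abruptly.
    return [i for i in range(1, len(readings))
            if abs(readings[i] - readings[i - 1]) > max_jump]
```

A spike both enters and leaves the signal abruptly, so it typically flags two consecutive indices; the example's real thresholds would be tuned to the sensor's noise floor.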
How to Run It:
parslet run examples/edge_mcu_sensor_processing.py
Expected Output: A new directory will be created in `Parslet_Results/` containing `clean_data.csv` and `anomalies.json`.