
Health Scanner

Scan thermometers, BP and glucose monitors, or eye images. Build a medical document to share with your doctor.

What are you capturing?

Camera

Click "Start camera" to use your device camera.

Captured

Captured items

Add one or more scans, then build your document.

Medical Image Viewer

View ultrasound, CT, MRI, X-ray, and other medical imaging scans. Drag and drop a file or use the file picker. Tools: scroll, window/level, zoom/pan, draw, filters.

After you open a scan in the viewer, you can save it to your Lab Report folder in Documents.

Device Integrations

Live Tracking

Connect your wearable devices natively — no third-party middleware. Fitbit and Garmin use direct OAuth. Apple Health and Health Connect sync via the Wellyfy mobile app. Bluetooth devices connect directly from your browser.

Apple Watch

Not connected

Heart rate, blood oxygen, ECG, respiratory rate, sleep, steps, VO2 max, temperature, and more. Data syncs from your Apple Watch via the Wellyfy iOS companion app using native HealthKit.

Heart Rate · SpO2 · ECG · Sleep · Steps · VO2 Max · Temperature

Fitbit

Not connected

Heart rate, SpO2, sleep stages, active zone minutes, steps, stress management score, and skin temperature — connected via direct Fitbit OAuth2.

Heart Rate · SpO2 · Sleep Stages · Steps · Stress · Skin Temp

Garmin

Not connected

Heart rate, pulse ox, respiration, stress, Body Battery, sleep, steps, and advanced running dynamics — connected via direct Garmin OAuth.

Heart Rate · Pulse Ox · Stress · Body Battery · Sleep · Steps

Health Connect

Not connected

Heart rate, steps, sleep, blood oxygen, and body temperature from apps and devices that write to Health Connect. Sync via the Wellyfy Android app — no account required.

Heart Rate · Steps · Sleep · SpO2 · Temperature

Real-time Health Data

No devices connected

No live data yet

Connect a device from the "Connect Devices" tab to start streaming health data in real time.

Pair a Bluetooth health device

Use Web Bluetooth to connect directly to BLE health devices from this browser. Supports heart rate monitors, pulse oximeters, and blood pressure cuffs that broadcast standard GATT profiles.
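The heart-rate monitors mentioned above expose the standard GATT Heart Rate Measurement characteristic (0x2A37). As a sketch of what decoding one of its notifications involves (the in-browser code would use Web Bluetooth in JavaScript; this Python version only illustrates the byte layout, and the function name is ours):

```python
def parse_heart_rate_measurement(data: bytes) -> int:
    """Parse a standard GATT Heart Rate Measurement (0x2A37) payload.

    Byte 0 is a flags field; bit 0 selects the heart-rate value format:
    0 -> uint8 in byte 1, 1 -> uint16 little-endian in bytes 1-2.
    """
    flags = data[0]
    if flags & 0x01:
        return int.from_bytes(data[1:3], "little")
    return data[1]
```

The remaining flag bits (energy expended, RR intervals) would extend the parse the same way, field by field.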

    Manually enter readings from your Apple Watch or other wearable device.

    Your medical document

    Download as PDF

    Understand your health data like a clinician

    See how each data point on this page is used in real-world diagnosis, how AI models interpret it, and how you could even bring your own models for research.

    Blood pressure & vitals · Glucose & metabolic health · Eye & document imaging · Medications & prescriptions · Stethoscope & heart sounds

    Health AI Lab

    Clinician signals
    • Trend in blood pressure over months
    • Patterns in glucose around meals
    • Eye images for diabetes/HTN changes
    • Medication history & interactions
    • Heart sounds (murmurs, extra beats)

    How AI reads this
    • Transforms numbers into trajectories
    • Compares against population ranges
    • Looks for risky combinations of factors
    • Assigns confidence, not diagnoses
    • Always meant to support clinicians

    Educational only · No self-diagnosis · Clinician-first design

    1. What each data type tells clinicians

    Blood pressure & heart rate

    Clinicians look at systolic/diastolic values over time, morning vs evening readings, and how quickly numbers change with medication or stress. Persistent readings above guideline ranges, especially with symptoms (headache, vision changes, chest pain), are more concerning than a single high number.
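The kind of over-time summary described here can be sketched as a small aggregation over home readings. The 130/80 mmHg cutoff below is illustrative, not a diagnostic rule:

```python
from statistics import mean

def bp_summary(readings):
    """Summarize home blood-pressure readings.

    `readings` is a list of (systolic, diastolic) tuples in mmHg.
    Returns averages plus the fraction of readings at or above an
    illustrative 130/80 mmHg threshold -- a single high value matters
    less than a persistent pattern.
    """
    elevated = sum(1 for s, d in readings if s >= 130 or d >= 80)
    return {
        "avg_systolic": mean(s for s, _ in readings),
        "avg_diastolic": mean(d for _, d in readings),
        "pct_elevated": elevated / len(readings),
    }
```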

    Glucose & metabolic panels

    Finger-stick and lab glucose values are interpreted relative to whether they were taken fasting or after a meal, alongside medications and A1c. Sequences of mildly elevated values can matter more than a single spike. AI models often treat these as time series and look for patterns that suggest increased risk rather than a diagnosis.
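The "sequences matter more than a single spike" idea can be sketched as a run-length check over a glucose series; the threshold below is illustrative, not a diagnostic cutoff:

```python
def longest_elevated_run(values, threshold=100):
    """Length of the longest consecutive run of readings above `threshold`
    (mg/dL). A long run of mildly elevated values is a different signal
    than one isolated spike; the default threshold is illustrative only."""
    best = run = 0
    for v in values:
        run = run + 1 if v > threshold else 0
        best = max(best, run)
    return best
```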

    Eye images & scanned reports

    Fundus images, OCT, and scanned PDF reports are parsed for keywords (e.g. “retinopathy”), measurements (cup-to-disc ratio, thickness), and impressions written by specialists. AI systems focus on structured patterns (lesions, bleeding, vessel changes), but final interpretation remains with the eye specialist.
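Keyword and measurement extraction from OCR'd report text can be sketched like this. The keyword list and the cup-to-disc regex are illustrative assumptions, not this site's actual parser:

```python
import re

# Illustrative finding keywords; a real system would use a clinical vocabulary.
FINDINGS = ["retinopathy", "hemorrhage", "edema", "drusen"]

def extract_findings(report_text: str):
    """Return finding keywords present in OCR'd report text, plus a
    cup-to-disc ratio written as e.g. 'C/D 0.4' or
    'cup-to-disc ratio 0.4' (None if absent)."""
    text = report_text.lower()
    found = [kw for kw in FINDINGS if kw in text]
    m = re.search(r"(?:c/d|cup-to-disc ratio)\s*[:=]?\s*(0?\.\d+)", text)
    ratio = float(m.group(1)) if m else None
    return found, ratio
```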

    Medications, prescriptions, allergies

    Medication lists help clinicians understand what conditions are being treated, potential interactions, and adherence issues. AI tools often use this as context: combining drug classes, doses, and timing with vitals to understand why numbers look the way they do.

    Heart sounds & stethoscope snapshots

    Audio from SuperCAPE or digital stethoscopes can show murmurs, extra heart sounds, or rhythm irregularities. AI models typically convert the sound to a spectrogram and then detect patterns; they suggest possibilities (e.g. “murmur-like sound present”) but cannot confirm structural heart disease without imaging and specialist review.
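Converting sound to a spectrogram, as described, can be sketched with a frame-by-frame DFT. Real pipelines use windowed FFTs (e.g. scipy or librosa); this dependency-free version is a toy:

```python
import cmath

def spectrogram(samples, frame_len=64, hop=32):
    """Toy magnitude spectrogram: split audio into overlapping frames and
    take the magnitude of each frame's DFT (first half of the bins).
    O(n^2) per frame -- fine for a sketch, not for real audio."""
    frames = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        mags = []
        for k in range(frame_len // 2):
            acc = sum(x * cmath.exp(-2j * cmath.pi * k * n / frame_len)
                      for n, x in enumerate(frame))
            mags.append(abs(acc))
        frames.append(mags)
    return frames
```

A model would then look for murmur-like energy patterns in these time-frequency frames rather than in the raw waveform.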

    2. How AI models use your data

    High-level view of what happens when you click “Analyze” on this page.

    1. Pre-processing: The system normalizes units (mmHg, mg/dL), parses text from images (OCR), and removes identifiers from reports before analysis.
    2. Feature extraction: For numbers, this includes averages, variability, and trends. For images and audio, this includes shape, texture, and frequency features.
    3. Model reasoning: A large language or vision model turns those features into explanations in plain language: what looks reassuring, what might need attention, and when to talk to your clinician.
    4. Safety layer: Outputs are constrained to be educational, avoid specific diagnoses, and explicitly remind you to seek professional care.
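The unit normalization in step 1 can be sketched for glucose, where 1 mmol/L is approximately 18 mg/dL (glucose's molar mass is about 180 g/mol):

```python
def normalize_glucose(value: float, unit: str) -> float:
    """Normalize a glucose reading to mg/dL.

    Uses the common approximation 1 mmol/L ~= 18 mg/dL
    (18.016 exactly, from glucose's ~180.16 g/mol molar mass)."""
    unit = unit.lower()
    if unit == "mg/dl":
        return value
    if unit == "mmol/l":
        return value * 18.0
    raise ValueError(f"unknown glucose unit: {unit}")
```

Blood pressure needs no such step (mmHg is the universal unit), but the same pattern applies to temperature (°F vs °C) and weight (lb vs kg).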

    On this site, AI is used to help you organize and understand your records, not to make independent treatment decisions.

    3. Research mode & your own models

    For advanced users and researchers, this section describes a possible workflow for using your own models. This is an educational blueprint – not enabled by default.

    • Step 1 – Choose a data type: e.g. blood pressure time series, glucose curves, or stethoscope audio spectrograms.
    • Step 2 – Build a dataset: export anonymized readings (e.g. CSV with timestamps, values, medications) and, with ethics approval if needed, label them for outcomes (e.g. “clinic escalated treatment”, “referral to cardiology”).
    • Step 3 – Train a model: use tools like Python, PyTorch, or AutoML services to train a model that predicts risk scores or flags patterns. Focus on calibration and fairness rather than raw accuracy.
    • Step 4 – Integrate via an API: expose your model behind a secure API endpoint that accepts structured JSON. This page could send anonymized features instead of raw identifiers.
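Step 4's "anonymized features instead of raw identifiers" can be sketched as a payload builder. The model name echoes the page's hypothetical "my-glucose-model" endpoint, and the field names are our own assumptions:

```python
def build_feature_payload(values):
    """Assemble an anonymized feature payload for a custom model endpoint.

    Only derived statistics leave the page -- no patient identifiers,
    timestamps, or free text. The dict would be serialized as JSON for
    the endpoint."""
    return {
        "model": "my-glucose-model",  # hypothetical per-data-type endpoint
        "features": {
            "count": len(values),
            "mean": sum(values) / len(values),
            "min": min(values),
            "max": max(values),
        },
    }
```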

    Bring-your-own-model (BYOM) concept

    In a future release, this section can allow you to configure a custom endpoint per data type (e.g. “my-glucose-model”, “my-stethoscope-model”) and switch between them in the UI. For now, use this as a guide to building responsible health AI.