pub struct BatchUploader {
    pub buffer: Arc<Mutex<Vec<Sample>>>,
    shutdown: Arc<AtomicBool>,
    upload_interval_secs: u64,
    sample_interval_secs: u64,
}
Fields§
§buffer: Arc<Mutex<Vec<Sample>>>
Buffer shared with the main thread.
§shutdown: Arc<AtomicBool>
Set to true by request_shutdown() to trigger a final flush.
§upload_interval_secs: u64
Polling interval for the upload thread (seconds, default 60).
§sample_interval_secs: u64
Sampling interval (seconds), needed to compute per-interval byte counts in the CSV.
Implementations§
impl BatchUploader
pub fn new(
    upload_interval_secs: u64,
    sample_interval_secs: u64,
) -> (Self, Arc<Mutex<Vec<Sample>>>)
Create a new BatchUploader and return the shared SampleBuffer
so the main thread can push samples into it.
pub fn shutdown_flag(&self) -> Arc<AtomicBool>
Clone the shutdown flag so main.rs can signal the upload thread to
flush and exit after moving self into the spawned thread.
pub fn spawn(
    self,
    ctx: Arc<Mutex<RunContext>>,
    agent: Agent,
    api_base: String,
    token: String,
) -> JoinHandle<Vec<String>>
Spawn the background upload thread.
The thread wakes every upload_interval_secs, drains the buffer, builds
a .csv.gz batch (gzip-compressed CSV, Content-Type: application/gzip),
and uploads it to S3. On shutdown signal it performs one final flush
before exiting.
Returns a JoinHandle<Vec<String>> whose value is the list of all
successfully uploaded S3 URIs (e.g. "s3://bucket/prefix/run-id/000000.csv.gz").
The caller uses this list to decide the /finish route:
- non-empty → data_source: "s3" with data_uris
- empty → data_source: "inline" with data_csv