Storage retention
hdevalke opened this issue · comments
Thank you for this nice project.
Storing the mails in a HashMap will eventually make the process crash with an OOM error.
I had a quick look and it should be possible to add a retention interval to keep the memory usage under control. I think this patch is one way to implement it:
diff --git a/backend/src/main.rs b/backend/src/main.rs
index f9ea6d5..467a484 100644
--- a/backend/src/main.rs
+++ b/backend/src/main.rs
@@ -8,13 +8,14 @@ use std::{
process,
str::FromStr,
sync::{Arc, RwLock},
+ time::SystemTime,
};
use tokio::sync::broadcast::{Receiver, Sender};
use tokio::time::Duration;
use tokio_graceful_shutdown::{SubsystemHandle, Toplevel};
use tracing::{event, Level};
use tracing_subscriber::{prelude::__tracing_subscriber_SubscriberExt, util::SubscriberInitExt};
-use types::{MailMessage, MessageId};
+use types::{MailMessage, MailMessageMetadata, MessageId};
use crate::web_server::http_server;
@@ -27,6 +28,7 @@ pub struct AppState {
storage: RwLock<HashMap<MessageId, MailMessage>>,
prefix: String,
index: Option<String>,
+ retention_interval: Duration,
}
#[derive(RustEmbed)]
@@ -63,6 +65,7 @@ async fn storage(
handle: SubsystemHandle,
) -> Result<(), Infallible> {
let mut running = true;
+ let mut retention_interval = tokio::time::interval(state.retention_interval);
while running {
tokio::select! {
@@ -73,6 +76,21 @@ async fn storage(
}
}
},
+ _ = retention_interval.tick() => {
+ if let Ok(mut storage) = state.storage.write() {
+ storage.retain(|_, mail_message| {
+ let meta = MailMessageMetadata::from(mail_message.clone());
+ let remove_before = std::time::SystemTime::now()
+ .checked_sub(Duration::from_secs(60))
+ .unwrap()
+ .duration_since(SystemTime::UNIX_EPOCH)
+ .unwrap()
+ .as_secs() as i64;
+ meta.time
+ > remove_before
+ });
+ }
+ }
_ = handle.on_shutdown_requested() => {
running = false;
}
@@ -129,6 +147,13 @@ async fn main() {
let prefix = std::env::var("MAILCRAB_PREFIX").unwrap_or_default();
let prefix = format!("/{}", prefix.trim_matches('/'));
+ let retention_interval =
+ std::env::var("MAILCRAB_RETENTION_INTERVAL").unwrap_or_else(|_| "3600".into());
+ let retention_interval = retention_interval
+ .parse::<u64>()
+ .map(Duration::from_secs)
+ .unwrap_or_default();
+
event!(
Level::INFO,
"MailCrab HTTP server starting on {http_host}:{http_port} and SMTP server on {smtp_host}:{smtp_port}"
@@ -142,6 +167,7 @@ async fn main() {
storage: Default::default(),
index: load_index(&prefix),
prefix,
+ retention_interval,
});
// store broadcasted messages in a key/value store
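The `retain` closure in the patch above boils down to a timestamp comparison against a cutoff. A minimal sketch of that logic, with a hypothetical `StoredMessage` type standing in for MailCrab's `MailMessage` and assuming its `time` field holds Unix seconds:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Hypothetical stand-in for a stored message; the real MailMessage
// carries many more fields, but only the arrival time matters here.
struct StoredMessage {
    time: i64, // Unix timestamp in seconds
}

/// Return true if a message should be kept, i.e. it arrived after
/// the cutoff computed as `now - retention`.
fn keep(message: &StoredMessage, now: SystemTime, retention: Duration) -> bool {
    let remove_before = now
        .checked_sub(retention)
        .unwrap_or(UNIX_EPOCH)
        .duration_since(UNIX_EPOCH)
        .map(|d| d.as_secs() as i64)
        .unwrap_or(0);
    message.time > remove_before
}

fn main() {
    let now = SystemTime::now();
    let now_secs = now.duration_since(UNIX_EPOCH).unwrap().as_secs() as i64;
    let fresh = StoredMessage { time: now_secs - 30 };
    let stale = StoredMessage { time: now_secs - 120 };
    let retention = Duration::from_secs(60);
    assert!(keep(&fresh, now, retention));
    assert!(!keep(&stale, now, retention));
    println!("fresh kept, stale pruned");
}
```

Note that, unlike the patch, this sketch avoids `unwrap()` on the time arithmetic, falling back to keeping everything rather than panicking inside the storage task.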
Please open a pull request with that diff.
I'll test this out a bit more and will open a pull request
See #87
Your example checks every retention_interval and removes messages older than 60 seconds. I've changed this to keep messages for retention_interval and to check every tenth of that interval, with a minimum of 60 seconds.
Thanks for adding this. I could not find the time yet.
When you have, please report back.
Long version of that request: I have looked at 29097ca and only understood this part:
@@ -129,6 +154,9 @@ async fn main() {
let prefix = std::env::var("MAILCRAB_PREFIX").unwrap_or_default();
let prefix = format!("/{}", prefix.trim_matches('/'));
+ // optional retention period, the default is 0 - which means messages are kept forever
+ let retention_period: u64 = parse_env_var("MAILCRAB_RETENTION_PERIOD", 0);
+
event!(
Level::INFO,
"MailCrab HTTP server starting on {http_host}:{http_port} and SMTP server on {smtp_host}:{smtp_port}"
so with the default, the risk of OOM (Out of Memory) errors is still present.
Having the same behaviour as before the change is fine.
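That default-zero behaviour can be sketched as a guard around the pruning cutoff (`prune_cutoff` is a hypothetical helper, not taken from the MailCrab source):

```rust
// Hypothetical sketch: a retention period of 0 disables pruning entirely,
// preserving the pre-change behaviour (messages are kept forever).
// Returns the cutoff timestamp below which messages would be removed,
// or None when no pruning should happen.
fn prune_cutoff(retention_period: u64, now_secs: i64) -> Option<i64> {
    if retention_period == 0 {
        None // keep everything; the OOM risk remains with unbounded traffic
    } else {
        Some(now_secs - retention_period as i64)
    }
}

fn main() {
    assert_eq!(prune_cutoff(0, 1_000), None);
    assert_eq!(prune_cutoff(600, 1_000), Some(400));
    println!("ok");
}
```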
The thing is, when I spend time checking this feature, I can't spend time on other MailCrab items that need tender loving care. Hence this update, which the original poster will most likely see.