Rust macro run_every_n

2025-05-24

problem

Referring to the logic here, I want to create a new macro to execute a statement after a specified number of intervals

Great idea! You’re essentially building a throttling macro that doesn’t just log, but executes any arbitrary expression after every n intervals.


✅ Macro Goal

You want:

run_every_n!("some_id", 5, {
    do_something(); // will only execute every 5 times
});

✅ Implementation: run_every_n! macro

Here’s a full macro that mirrors the logic of info_throttle!, but executes a block:

#[macro_export]
macro_rules! run_every_n {
    ($tag:expr, $n:expr, $body:block) => {{
        use std::collections::HashMap;
        use std::sync::atomic::{AtomicUsize, Ordering};
        use std::sync::{Mutex, OnceLock};

        // Global counter map, lazily and safely initialized exactly once
        // (note: one map per macro expansion site, keyed by tag).
        static COUNTERS: OnceLock<Mutex<HashMap<&'static str, AtomicUsize>>> = OnceLock::new();

        // Bump the counter for this tag, releasing the lock before running the body.
        let count = {
            let mut counters = COUNTERS
                .get_or_init(|| Mutex::new(HashMap::new()))
                .lock()
                .unwrap();
            counters
                .entry($tag)
                .or_insert_with(|| AtomicUsize::new(0))
                .fetch_add(1, Ordering::Relaxed)
        };

        if count % $n == 0 {
            $body
        }
    }};
}

✅ Example Usage

run_every_n!("task_tick", 10, {
    println!("Executed every 10 ticks");
});

🧠 Notes

  • COUNTERS uses a Mutex<HashMap<...>> behind the scenes. If you’re already using a static like LOG_COUNTERS, you can reuse that to avoid duplication.
  • $tag must be a &'static str for lifetime safety.
  • This macro is designed for light, infrequent usage. For high-concurrency performance, you can migrate the map to a DashMap (as shown below).

✅ Bonus: Cleaner version (reuse your global)

If you already have:

pub static LOG_COUNTERS: Lazy<DashMap<&'static str, AtomicUsize>> = Lazy::new(DashMap::new);

Then your macro becomes simpler:

#[macro_export]
macro_rules! run_every_n {
    ($tag:expr, $n:expr, $body:block) => {{
        let counter = $crate::macros::LOG_COUNTERS
            .entry($tag)
            .or_insert_with(|| std::sync::atomic::AtomicUsize::new(0));
        let count = counter.fetch_add(1, std::sync::atomic::Ordering::Relaxed);

        if count % $n == 0 {
            $body
        }
    }};
}

Let me know which version fits your setup, and I can help you tailor it further.

problem

Is it ok to use this version?

#[macro_export]
macro_rules! run_every_n {
    ($tag:expr, $n:expr, $body:block) => {{
        use dashmap::DashMap;
        use once_cell::sync::Lazy;
        use std::sync::atomic::{AtomicUsize, Ordering};

        static COUNTERS: Lazy<DashMap<&'static str, AtomicUsize>> = Lazy::new(DashMap::new);

        let counter = COUNTERS.entry($tag).or_insert_with(|| AtomicUsize::new(0));
        let count = counter.fetch_add(1, Ordering::Relaxed);

        if count % $n == 0 {
            $body
        }
    }};
}

Yes, ✅ this version is perfectly fine — and in fact, it’s the cleanest and most performant option for concurrent contexts, as long as:

  1. You’re using dashmap and once_cell in your project (which are excellent choices).
  2. $tag is a &'static str, since that is the key type of the DashMap.

✅ What This Version Does Well:

  • Uses DashMap: thread-safe concurrent map (better than Mutex<HashMap> for frequent concurrent access).
  • Uses once_cell::sync::Lazy: ensures your COUNTERS static is initialized only once, safely and lazily.
  • Atomic counters: fast and efficient tracking per-tag.

✅ Example Usage

run_every_n!("heartbeat", 10, {
    println!("⏱️ Running task every 10 ticks");
});

You can drop this into any loop or background task, and it’ll execute the block every 10 times, keyed by "heartbeat".
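
For instance, in a simple polling loop (hypothetical), the block fires on the first iteration and then on every 10th one after that, since the counter starts at 0:

for tick in 0..100u32 {
    run_every_n!("heartbeat", 10, {
        // Runs on ticks 0, 10, 20, ...
        println!("tick {}: heartbeat", tick);
    });
}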


✅ Requirements

Make sure your dependencies include:

[dependencies]
dashmap = "5"
once_cell = "1.18"

✅ Optional: If you want to make COUNTERS globally accessible from multiple macros

You could move it into a module, like:

// src/macros.rs
use dashmap::DashMap;
use once_cell::sync::Lazy;
use std::sync::atomic::AtomicUsize;

pub static LOG_COUNTERS: Lazy<DashMap<&'static str, AtomicUsize>> = Lazy::new(DashMap::new);

Then use:

let counter = $crate::macros::LOG_COUNTERS.entry($tag)...

This would allow other macros (like info_throttle!) to share the same counters if needed.
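
For example, a throttled logging macro could share the same map. This is only a minimal sketch assuming the log crate; your existing info_throttle! may of course look different:

#[macro_export]
macro_rules! info_throttle {
    ($tag:expr, $n:expr, $($arg:tt)*) => {{
        let counter = $crate::macros::LOG_COUNTERS
            .entry($tag)
            .or_insert_with(|| std::sync::atomic::AtomicUsize::new(0));
        let count = counter.fetch_add(1, std::sync::atomic::Ordering::Relaxed);

        if count % $n == 0 {
            // Log only on every $n-th call for this tag.
            log::info!($($arg)*);
        }
    }};
}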


✅ Final Thought

Your current version is solid. You’re using modern and idiomatic Rust tools (like DashMap + Lazy), and the macro is structured safely and clearly. You’re good to go!

problem

Can multiple statements be executed within {}?

Yes, you can absolutely execute multiple statements inside the {} block of your run_every_n! macro — just like a normal block in Rust.


✅ Example

Here’s how you can use the macro with multiple statements:

run_every_n!("task", 5, {
    println!("Step 1: doing something...");
    let value = 42;
    println!("Step 2: computed value = {}", value);
});

This works perfectly, and all statements inside the block will execute together every n times.


✅ How it Works

In your macro definition:

if count % $n == 0 {
    $body
}

Since $body is expected to be a Rust block { ... }, you can include any number of statements or even early returns (depending on context).
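
For instance, a return inside the block exits the enclosing function, because the macro expands the block in place (hypothetical function shown for illustration):

fn poll(tick: u64) -> Option<u64> {
    run_every_n!("poll", 100, {
        // `return` here leaves `poll` itself, not just the throttled block.
        return Some(tick);
    });
    None
}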


⚠️ Tip for Flexibility

If you ever want expression-style usage (e.g. getting a value back out of the block), you might be tempted to write:

let result = run_every_n!("some_tag", 10, {
    println!("Running expensive computation...");
    compute_something()
});

But note: in its current form this won’t compile when the block evaluates to anything other than (), because the if has no else branch, so run_every_n! always produces (). If you need a value back, tweak it to return an Option:

#[macro_export]
macro_rules! run_every_n {
    ($tag:expr, $n:expr, $body:block) => {{
        use dashmap::DashMap;
        use once_cell::sync::Lazy;
        use std::sync::atomic::{AtomicUsize, Ordering};

        static COUNTERS: Lazy<DashMap<&'static str, AtomicUsize>> = Lazy::new(DashMap::new);

        let counter = COUNTERS.entry($tag).or_insert_with(|| AtomicUsize::new(0));
        let count = counter.fetch_add(1, Ordering::Relaxed);

        if count % $n == 0 {
            Some($body)
        } else {
            None
        }
    }};
}

Then you can use:

if let Some(val) = run_every_n!("compute", 5, {
    println!("executing...");
    123
}) {
    println!("Got value: {}", val);
}

Let me know if you want that functionality added — but for your current goal (just executing statements periodically), multiple statements are totally fine as-is.

problem

Is it OK to execute such a statement?

Yes, your usage of the run_every_n! macro like this:

run_every_n!("m_rpush", 20, {
    RedisUtil::rpush(&key, &value)?;
    RedisUtil::ltrim(&key, -120, -1)?;
});

is perfectly fine and valid Rust, with one small caveat regarding the ? operator.


🔧 Important Note: The ? Operator

Since you’re using the ? operator inside the block, this macro must be invoked inside a function that returns a compatible Result<_, _>; the block is expanded in place, so ? propagates errors from the enclosing function, and the code won’t compile otherwise.

So this is valid:

fn my_func() -> Result<(), BinanceError> {
    let key = "some_key".to_string();
    let value = "value".to_string();

    run_every_n!("m_rpush", 20, {
        RedisUtil::rpush(&key, &value)?;
        RedisUtil::ltrim(&key, -120, -1)?;
    });

    Ok(())
}

But this would NOT be valid:

fn main() {
    // ❌ won't compile, because `?` needs a function returning Result
    run_every_n!("m_rpush", 20, {
        RedisUtil::rpush(&key, &value)?;
    });
}

✅ Conclusion

Yes, your macro usage is totally correct as long as it’s inside a function that supports ? (i.e. returns a Result). You’re executing multiple fallible Redis operations every 20th time, and it’s a clean and idiomatic way to throttle writes.

Let me know if you’d like a version of the macro that silently logs or ignores errors instead of using ? — sometimes that’s useful in a non-critical telemetry or logging path.
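
For reference, here is one way to do that: a minimal sketch (hypothetically named try_every_n!, not your existing macro) that runs the body inside a closure so ? stays local, then logs the error to stderr instead of propagating it. It assumes the block’s errors convert into Box<dyn std::error::Error> and that the block itself evaluates to ():

#[macro_export]
macro_rules! try_every_n {
    ($tag:expr, $n:expr, $body:block) => {{
        use dashmap::DashMap;
        use once_cell::sync::Lazy;
        use std::sync::atomic::{AtomicUsize, Ordering};

        static COUNTERS: Lazy<DashMap<&'static str, AtomicUsize>> = Lazy::new(DashMap::new);

        let counter = COUNTERS.entry($tag).or_insert_with(|| AtomicUsize::new(0));
        let count = counter.fetch_add(1, Ordering::Relaxed);

        if count % $n == 0 {
            // The closure gives `?` a local Result to propagate into,
            // so the caller does not need to return a Result itself.
            let result: Result<(), Box<dyn std::error::Error>> = (|| {
                $body
                Ok(())
            })();
            if let Err(e) = result {
                eprintln!("{}: throttled block failed: {}", $tag, e);
            }
        }
    }};
}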