Service resource management: Mailman3 vs Mailman2

I found a nifty trick for measuring the resource consumption of various services using systemd.

This was while trying to gauge the differences between Mailman v2 and Mailman v3 resource management. I have ranted about how much more expensive MM3 is in resource consumption (basically, about ¾ of a 2GB RAM VPS is consumed). I wanted further details.

What I found was a setting (or several) that systemd provides:

There are a couple methods for enabling the *Accounting= features:

Edit /etc/systemd/system.conf, enabling:

DefaultCPUAccounting=yes
DefaultIOAccounting=yes
DefaultIPAccounting=yes
DefaultBlockIOAccounting=yes
DefaultMemoryAccounting=yes
DefaultTasksAccounting=yes

If one wants it for a single service, also a couple methods:

systemctl set-property mailman3 TasksAccounting=yes MemoryAccounting=yes
systemctl daemon-reload

Or add these lines (my preferred method, no systemctl daemon-reload needed):

# systemctl edit mailman3

[Service]
MemoryAccounting=yes
TasksAccounting=yes

Or: do it manually.

This makes it really easy to measure resources used by a service that has many processes running, i.e.:

# systemctl status mailman
...
Tasks: 9 (limit: 2256)
Memory: 46.5M
CGroup: /system.slice/mailman.service
├─1054 /usr/bin/python2 /usr/lib/mailman/bin/mailmanctl
├─1056 /usr/bin/python2 /var/lib/mailman/bin/qrunner
├─1057 /usr/bin/python2 /var/lib/mailman/bin/qrunner
├─1058 /usr/bin/python2 /var/lib/mailman/bin/qrunner
├─1067 /usr/bin/python2 /var/lib/mailman/bin/qrunner
├─1068 /usr/bin/python2 /var/lib/mailman/bin/qrunner
├─1071 /usr/bin/python2 /var/lib/mailman/bin/qrunner
├─1083 /usr/bin/python2 /var/lib/mailman/bin/qrunner
└─1084 /usr/bin/python2 /var/lib/mailman/bin/qrunner

And for comparison, mailman 3 (which has TWO service files):

# systemctl status mailman3
...
Tasks: 17
Memory: 889.9M
CGroup: /system.slice/mailman3.service
├─4112698 /opt/mailman/venv/bin/python
├─4112711 /opt/mailman/venv/bin/python
├─4112712 /opt/mailman/venv/bin/python
├─4112713 /opt/mailman/venv/bin/python
├─4112714 /opt/mailman/venv/bin/python
├─4112715 /opt/mailman/venv/bin/python
├─4112717 /opt/mailman/venv/bin/python
├─4112718 /opt/mailman/venv/bin/python
├─4112719 /opt/mailman/venv/bin/python
├─4112720 /opt/mailman/venv/bin/python
├─4112721 /opt/mailman/venv/bin/python
├─4112725 /opt/mailman/venv/bin/python
├─4112726 /opt/mailman/venv/bin/python
├─4112790 /opt/mailman/venv/bin/python
└─4112791 /opt/mailman/venv/bin/python

# systemctl status mailman3-web
...
Tasks: 12
Memory: 182.4M
CGroup: /system.slice/mailman3-web.service
├─4112890 /opt/mailman/venv/bin/uwsgi
├─4112904 /opt/mailman/venv/bin/uwsgi
├─4112905 /opt/mailman/venv/bin/uwsgi
├─4112906 /bin/sh ...
├─4112908 /opt/mailman/venv/bin/python
├─4112910 /opt/mailman/venv/bin/python
├─4112912 /opt/mailman/venv/bin/python
├─4112913 /opt/mailman/venv/bin/python
├─4112914 /opt/mailman/venv/bin/python
└─4112915 /opt/mailman/venv/bin/python

It's now quite clear: MM2 uses 9 processes and < 50 MB RAM, but MM3 uses 17+12=29 processes (and that's with the default NNTP bridge disabled), and 889.9+182.4=1072.3 MB RAM.

That's a *lot* of Python processes!

Also, both require Apache (or NGINX), and MM3 requires PostgreSQL.

As it is, I can see a lot of small mailing list users abandoning MM3 over this issue (plus its complexity).

For my list, that's about one Python interpreter loaded into RAM for each list subscriber!
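
One way to dig a little deeper than the Memory: line (a rough sketch, assuming a cgroup v2 / unified hierarchy, a kernel that provides /proc/<pid>/smaps_rollup, and root access; the unit name is only an example): the cgroup figure doesn't say how much of that memory is shared between the processes, but summing the proportional set size (Pss) of every PID in the unit's cgroup splits shared pages fairly across the processes that share them.

# pss_of_unit.py - sum proportional set size (Pss) for all PIDs in a unit's cgroup
import sys

def pids_of_unit(unit):
    # cgroup v2 layout; e.g. unit = "mailman3.service"
    with open(f"/sys/fs/cgroup/system.slice/{unit}/cgroup.procs") as f:
        return [int(line) for line in f]

def pss_kib(pid):
    # /proc/<pid>/smaps_rollup has lines like "Pss:   12345 kB"
    with open(f"/proc/{pid}/smaps_rollup") as f:
        for line in f:
            if line.startswith("Pss:"):
                return int(line.split()[1])
    return 0

unit = sys.argv[1] if len(sys.argv) > 1 else "mailman3.service"
pids = pids_of_unit(unit)
total = sum(pss_kib(p) for p in pids)
print(f"{unit}: {len(pids)} processes, ~{total / 1024:.1f} MiB total Pss")

If that total comes out well below the Memory: figure, a good chunk of the cgroup number is page cache and shared pages rather than per-process duplication.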

Thank you for this info. Imnsho, many projects are becoming 'over developed' (my pet peeve is all desktop environments) - we are still using mm2 and if it is no longer supported, we will probably fork or maintain security, as the functionality is enough and it is working fine.

I do not use MM3 and I was wondering, is it the many modules that are using all the resources? (I have not used google, but I am thinking maybe hyperkitty (i think it's called?) and/or some of the other modules are resource hungry?) Can some of the modules be disabled?

On Thu, 30 Jan 2025 03:30:05 -0800 Ron / BCLUG via talk <talk@gtalug.org> wrote:
I found a nifty trick for measuring the resource consumption of various services using systemd.
This was while trying to gauge the differences between Mailman v2 and Mailman v3 resource management.
I have ranted about how much more expensive MM3 is in resource consumption (basically, about ¾ of a 2GB RAM VPS is consumed).
I wanted further details.
What I found was, a setting (or several) that systemd provides:
There are a couple methods for enabling the *Accounting= features:
Edit /etc/systemd/system.conf, enabling:
DefaultCPUAccounting=yes
DefaultIOAccounting=yes
DefaultIPAccounting=yes
DefaultBlockIOAccounting=yes
DefaultMemoryAccounting=yes
DefaultTasksAccounting=yes
If one wants it for a single service, also a couple methods:
systemctl set-property mailman3 TasksAccounting=yes MemoryAccounting=yes
systemctl daemon-reload
Or add these lines (my preferred method, no systemctl daemon-reload needed):
# systemctl edit mailman3
[Service]
MemoryAccounting=yes
TasksAccounting=yes
Or: do it manually
This makes it really easy to measure resources used by a service that has many processes running, i.e.:
# systemctl status mailman
...
Tasks: 9 (limit: 2256)
Memory: 46.5M
CGroup: /system.slice/mailman.service
├─1054 /usr/bin/python2 /usr/lib/mailman/bin/mailmanctl
├─1056 /usr/bin/python2 /var/lib/mailman/bin/qrunner
├─1057 /usr/bin/python2 /var/lib/mailman/bin/qrunner
├─1058 /usr/bin/python2 /var/lib/mailman/bin/qrunner
├─1067 /usr/bin/python2 /var/lib/mailman/bin/qrunner
├─1068 /usr/bin/python2 /var/lib/mailman/bin/qrunner
├─1071 /usr/bin/python2 /var/lib/mailman/bin/qrunner
├─1083 /usr/bin/python2 /var/lib/mailman/bin/qrunner
└─1084 /usr/bin/python2 /var/lib/mailman/bin/qrunner
And for comparison, mailman 3 (which has TWO service files):
# systemctl status mailman3
...
Tasks: 17
Memory: 889.9M
CGroup: /system.slice/mailman3.service
├─4112698 /opt/mailman/venv/bin/python
├─4112711 /opt/mailman/venv/bin/python
├─4112712 /opt/mailman/venv/bin/python
├─4112713 /opt/mailman/venv/bin/python
├─4112714 /opt/mailman/venv/bin/python
├─4112715 /opt/mailman/venv/bin/python
├─4112717 /opt/mailman/venv/bin/python
├─4112718 /opt/mailman/venv/bin/python
├─4112719 /opt/mailman/venv/bin/python
├─4112720 /opt/mailman/venv/bin/python
├─4112721 /opt/mailman/venv/bin/python
├─4112725 /opt/mailman/venv/bin/python
├─4112726 /opt/mailman/venv/bin/python
├─4112790 /opt/mailman/venv/bin/python
└─4112791 /opt/mailman/venv/bin/python
# systemctl status mailman3-web
...
Tasks: 12
Memory: 182.4M
CGroup: /system.slice/mailman3-web.service
├─4112890 /opt/mailman/venv/bin/uwsgi
├─4112904 /opt/mailman/venv/bin/uwsgi
├─4112905 /opt/mailman/venv/bin/uwsgi
├─4112906 /bin/sh ...
├─4112908 /opt/mailman/venv/bin/python
├─4112910 /opt/mailman/venv/bin/python
├─4112912 /opt/mailman/venv/bin/python
├─4112913 /opt/mailman/venv/bin/python
├─4112914 /opt/mailman/venv/bin/python
└─4112915 /opt/mailman/venv/bin/python
It's now quite clear: MM2 uses 9 processes and < 50 MB RAM, but MM3 uses 17+12=29 processes (and that's with the default NNTP bridge disabled), and 889.9+182.4=1072.3 MB RAM.
That's a *lot* of Python processes!
Also, both require Apache (or NGINX), and MM3 requires PostgreSQL.
As it is, I can see a lot of small mailing list users abandoning MM3 over this issue (plus its complexity).
For my list, that's about one Python interpreter loaded into RAM for each list subscriber!

Ron / BCLUG via talk said on Thu, 30 Jan 2025 03:30:05 -0800
I found a nifty trick for measuring the resource consumption of various services using systemd.
This was while trying to gauge the differences between Mailman v2 and Mailman v3 resource management.
I have ranted about how much more expensive MM3 is in resource consumption (basically, about ¾ of a 2GB RAM VPS is consumed).
I wanted further details.
What are your results when you run that same testing process on systemd itself?

SteveT

Steve Litt
http://444domains.com

Steve Litt via talk wrote on 2025-01-30 15:40:
What are your results when you run that same testing process on systemd itself?
How do you measure system resources without systemd? IO, IP traffic, tasks / processes per job, memory consumption across a set of processes, CPU usage, etc. need to be accounted for.

To answer your question, first I ran `ps -e | wc -l` and got:

# ps -e | wc -l
482

Then I chose `systemctl status system.slice` to get this list (copied & pasted, the reader can deal with the formatting, looks nice in `less` - the default presentation mode - has colours too):

● system.slice - System Slice
Loaded: loaded
Active: active since Fri 2025-01-24 12:58:53 PST; 6 days ago
Docs: man:systemd.special(7)
IP: 1.9G in, 52.2M out
IO: 227.1G read, 122.4G written
Tasks: 173
Memory: 416.1M
CPU: 1d 18h 25min 36.543s
CGroup: /system.slice
├─NetworkManager.service
│ └─1458 /usr/sbin/NetworkManager --no-daemon
├─accounts-daemon.service
│ └─1448 /usr/libexec/accounts-daemon
├─acpid.service
│ └─1449 /usr/sbin/acpid
├─atop.service
│ └─1704670 /usr/bin/atop -R -w /var/log/atop/atop_20250130 600
├─atopacct.service
│ └─1454 /usr/sbin/atopacctd
├─avahi-daemon.service
│ ├─1452 "avahi-daemon: running [w00.local]"
│ └─1516 "avahi-daemon: chroot helper"
├─colord.service
│ └─1680 /usr/libexec/colord
├─cron.service
│ └─1456 /usr/sbin/cron -f -P
├─cups-browsed.service
│ └─1704678 /usr/sbin/cups-browsed
├─cups.service
│ ├─1704619 /usr/sbin/cupsd -l
│ ├─1704676 /usr/lib/cups/notifier/dbus dbus:// ""
│ ├─1704677 /usr/lib/cups/notifier/dbus dbus:// ""
│ └─1704909 /usr/lib/cups/notifier/dbus dbus:// ""
├─dbus.service
│ └─1457 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
├─fwupd.service
│ └─119716 /usr/libexec/fwupd/fwupd
├─haveged.service
│ └─1390 /usr/sbin/haveged --Foreground --verbose=1
├─irqbalance.service
│ └─1464 /usr/sbin/irqbalance --foreground
├─libvirtd.service
│ ├─1630 /usr/sbin/libvirtd
│ ├─1843 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
│ └─1844 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
├─lxc-monitord.service
│ └─1631 /usr/lib/x86_64-linux-gnu/lxc/lxc-monitord --daemon
├─lxc-net.service
│ └─2142 dnsmasq --conf-file=/dev/null -s lxc -S /lxc/ -u lxc-dnsmasq --strict-order --bind-interfaces --pid-file=/run/lxc/dnsmasq.pid --listen-address 10.0.3.1 --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative
├─lxcfs.service
│ └─1470 /usr/bin/lxcfs /var/lib/lxcfs
├─mdmonitor.service
│ └─1079 /sbin/mdadm --monitor --scan
├─networkd-dispatcher.service
│ └─1474 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
├─nscd.service
│ └─1482 /usr/sbin/nscd
├─polkit.service
│ └─1632 /usr/libexec/polkitd --no-debug
├─power-profiles-daemon.service
│ └─1480 /usr/libexec/power-profiles-daemon
├─rpcbind.service
│ └─1387 /sbin/rpcbind -f -w
├─rsyslog.service
│ └─1483 /usr/sbin/rsyslogd -n -iNONE
├─rtkit-daemon.service
│ └─2956 /usr/libexec/rtkit-daemon
├─sddm.service
│ ├─2922 /usr/bin/sddm
│ └─2926 /usr/lib/xorg/Xorg -nolisten tcp -background none -seat seat0 vt2 -auth /run/sddm/xauth_OgQFZJ -noreset -displayfd 16
├─smartmontools.service
│ └─1484 /usr/sbin/smartd -n
├─snap.cups.cups-browsed.service
│ ├─1633 /bin/sh /snap/cups/1067/scripts/run-cups-browsed
│ ├─2207 /bin/sh /snap/cups/1067/scripts/run-cups-browsed
│ └─1329154 sleep 3600
├─snap.cups.cupsd.service
│ ├─1634 /bin/sh /snap/cups/1067/scripts/run-cupsd
│ ├─2084 cupsd -f -s /var/snap/cups/common/etc/cups/cups-files.conf -c /var/snap/cups/common/etc/cups/cupsd.conf
│ └─2085 cups-proxyd /var/snap/cups/common/run/cups.sock /run/cups/cups.sock -l --logdir /var/snap/cups/1067/var/log
├─snapd.service
│ └─1491 /usr/lib/snapd/snapd
├─ssh.service
│ ├─1678 "sshd: /usr/sbin/sshd -D [listener] 1 of 10-100 startups"
│ └─1345219 "sshd: [accepted]"
├─switcheroo-control.service
│ └─1492 /usr/libexec/switcheroo-control
├─system-postfix.slice
│ └─postfix@-.service
│   ├─2913 /usr/lib/postfix/sbin/master -w
│   ├─2915 qmgr -l -t unix -u
│   └─1324231 pickup -l -t unix -u -c
├─system-postgresql.slice
│ └─postgresql@14-main.service
│   ├─2165 /usr/lib/postgresql/14/bin/postgres -D /var/lib/postgresql/14/main -c config_file=/etc/postgresql/14/main/postgresql.conf
│   ├─2182 "postgres: 14/main: checkpointer"
│   ├─2183 "postgres: 14/main: background writer"
│   ├─2184 "postgres: 14/main: walwriter"
│   ├─2185 "postgres: 14/main: autovacuum launcher"
│   ├─2186 "postgres: 14/main: stats collector"
│   ├─2187 "postgres: 14/main: logical replication launcher"
│   └─2406063 "postgres: 14/main: postgres rfm [local] idle"
├─systemd-journald.service
│ └─610 /lib/systemd/systemd-journald
├─systemd-logind.service
│ └─1493 /lib/systemd/systemd-logind
├─systemd-machined.service
│ └─1494 /lib/systemd/systemd-machined
├─systemd-networkd.service
│ └─1370 /lib/systemd/systemd-networkd
├─systemd-resolved.service
│ └─1401 /lib/systemd/systemd-resolved
├─systemd-timesyncd.service
│ └─1388 /lib/systemd/systemd-timesyncd
├─systemd-udevd.service
│ └─643 /lib/systemd/systemd-udevd
├─udisks2.service
│ └─1496 /usr/libexec/udisks2/udisksd
├─unattended-upgrades.service
│ └─1655 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
├─upower.service
│ └─3491 /usr/libexec/upowerd
├─virtlogd.service
│ └─2088 /usr/sbin/virtlogd
├─wpa_supplicant.service
│ └─1497 /sbin/wpa_supplicant -u -s -O /run/wpa_supplicant
└─zfs-zed.service
  └─1499 /usr/sbin/zed -F

[snipped some log entries]
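
For a one-shot ranking rather than the full status dump, systemd-cgtop shows the same thing interactively; the numbers can also be pulled straight from the cgroup tree. A small sketch (assuming cgroup v2 mounted at /sys/fs/cgroup and memory accounting enabled, as discussed above):

import pathlib

# Rank everything under system.slice by its current memory charge (memory.current).
slice_dir = pathlib.Path("/sys/fs/cgroup/system.slice")
usage = []
for unit in slice_dir.iterdir():
    counter = unit / "memory.current"
    if counter.is_file():
        usage.append((int(counter.read_text()), unit.name))

for used, name in sorted(usage, reverse=True)[:15]:
    print(f"{used / 2**20:8.1f} MiB  {name}")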

From: Ron / BCLUG via talk <talk@gtalug.org>
Tasks: 9 (limit: 2256) Memory: 46.5M
And for comparison, mailman 3 (which has TWO service files):
Tasks: 17 Memory: 889.9M
Tasks: 12 Memory: 182.4M
Why is all that space being used? I would expect that most of the space is taken by immutable and hence shareable stuff (code). Is this containerized in some way that prevents sharing? Is Python's runtime organized in some way that prevents sharing? Are these statistics counting shared memory multiple times?

D. Hugh Redelmeier via talk wrote on 2025-02-01 11:14:
Why is all that space being used?
I would expect that most of the space is taken by immutable and hence shareable stuff (code).
I think this is correct: basically most of the processes are Python interpreters plus the scripts they're running.
Is this containerized in some way that prevents sharing?
No.
Is Python's runtime organized in some way that prevents sharing?
I don't think so, I think it's just that, instead of forking / spawning processes to handle events, it's using IPC or something to have the processes talk to each other. Probably more efficient on large, busy lists, but overkill for small ones. The Apache model of child processes being passed work loads would have been a *much* better option IMHO. Then it could be user-configurable.
Are these statistics counting shared memory multiple times?
I don't think so.

Here's a Mailman v2 process list (lines trimmed to avoid wrapping):

# systemctl status mailman
/usr/bin/python2 /usr/lib/mailman/bin/mailmanctl -s start
/usr/bin/python2 /var/lib/mailman/bin/qrunner --runner=ArchRunner
/usr/bin/python2 /var/lib/mailman/bin/qrunner --runner=BounceRunner
/usr/bin/python2 /var/lib/mailman/bin/qrunner --runner=CommandRunner
/usr/bin/python2 /var/lib/mailman/bin/qrunner --runner=IncomingRunner
/usr/bin/python2 /var/lib/mailman/bin/qrunner --runner=NewsRunner
/usr/bin/python2 /var/lib/mailman/bin/qrunner --runner=OutgoingRunner
/usr/bin/python2 /var/lib/mailman/bin/qrunner --runner=VirginRunner
/usr/bin/python2 /var/lib/mailman/bin/qrunner --runner=RetryRunner

I see in there a qrunner for NewsRunner, which is an NNTP bridge. Wish I knew how to disable that.

We see similar with MM3, but many more in the main process, and a couple / few uwsgi (something something gateway interface) processes for web request / REST processing.

So, there's lots and lots of duplication, but Linux doesn't do memory de-duplication, so ... it's a resource hog for us hobbyists.
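
One way to put a number on how much is or isn't shared (a rough sketch, not something measured in this thread; run as root, and the pgrep pattern is just a guess at a stock MM2 layout): total up Rss versus Pss and the Shared_* fields for the qrunner processes. A big gap between the Rss and Pss totals means the kernel is already sharing those pages - through the page cache and copy-on-write - without any de-duplication pass.

import subprocess

# PIDs of the Mailman 2 runners (pattern is an assumption; adjust for your install)
pids = subprocess.run(["pgrep", "-f", "bin/qrunner"],
                      capture_output=True, text=True).stdout.split()

fields = ("Rss", "Pss", "Shared_Clean", "Shared_Dirty", "Private_Dirty")
totals = dict.fromkeys(fields, 0)
for pid in pids:
    with open(f"/proc/{pid}/smaps_rollup") as f:
        for line in f:
            key = line.split(":")[0]
            if key in fields:
                totals[key] += int(line.split()[1])   # values are in kB

for key in fields:
    print(f"{key:>13}: {totals[key] / 1024:7.1f} MiB  (summed over {len(pids)} processes)")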

From: Ron / BCLUG via talk <talk@gtalug.org>
Thanks for looking at this
D. Hugh Redelmeier via talk wrote on 2025-02-01 11:14:
Why is all that space being used?
I would expect that most of the space is taken by immutable and hence shareable stuff (code).
I think this is correct: basically most of the processes are Python interpreters plus the scripts they're running.
Python interpreters are surely shared. That's the way Linux works for compiled languages like C. As I understand it, the Python code is pre-compiled into some kind of byte-code (.pyc file). I don't know when. If it is ahead of time, the result ought to be (largely) stored in a read-only file that gets memory mapped into RAM, thus being shared. If byte-code isn't shared, this seems like a scandal. If the space is actually being consumed by data, how could mailman be so profligate?
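
A quick way to see what is and isn't file-backed in a running interpreter (a sketch; on some distros the interpreter is one big binary instead of a small binary plus libpython, so the exact names vary): look at the process's own /proc maps. The executable and shared libraries show up as mapped files, so their read-only pages are shared; .pyc byte-code is read and unmarshalled onto the heap, so - as far as I know for CPython - it never appears as a mapping at all.

import os
import email, json   # pull in some stdlib byte-code first

libs, pycs = set(), set()
with open(f"/proc/{os.getpid()}/maps") as f:
    for line in f:
        parts = line.split()
        if len(parts) >= 6:                 # file-backed mappings have a pathname
            path = parts[5]
            if path.endswith(".pyc"):
                pycs.add(path)
            elif ".so" in path or "python" in path:
                libs.add(path)

print("file-backed python/.so mappings (shareable):")
for p in sorted(libs):
    print("  ", p)
print("mapped .pyc files:", sorted(pycs) or "none")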

D. Hugh Redelmeier via talk wrote on 2025-02-02 20:30:
I think this is correct: basically most of the processes are Python interpreters plus the scripts they're running.
Python interpreters are surely shared. That's the way Linux works for compiled languages like C.
Think of it like this:

$ python prog1.py &
$ python prog2.py &

There are now two python interpreters loaded in RAM. That's what's going on here.

Similarly with compiled C programs:

$ ./prog1 &
$ ./prog2 &

Those programs might be loading dynamic libraries, but they'd be loading them twice.
If byte-code isn't shared, this seems like a scandal.
It is indeed kind of scandalous.
If the space is actually being consumed by data, how could mailman be so profligate?

Just a bunch of python scripts running.
Check it out on penguin, running mailman v2:
$ pgrep python | wc -l
8
Alternately (lines trimmed to avoid wrapping):
$ ps -ef | grep python
/usr/bin/python /usr/lib/mailman/bin/mailmanctl -s -q start
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=ArchRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=BounceRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=CommandRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=IncomingRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=NewsRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=OutgoingRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=VirginRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=RetryRunner
I can't imagine the kernel seeing all those python scripts running and de-duplicating RAM like some file systems will de-duplicate storage (ZFS) as that is expensive itself. Corrections welcome.

Niggle: the code (.txt) of the python interpreters is shared, but not the stack, the rss nor the heap.

Starting a new process still requires reading the binary from disk to get the rss contents and initial heap size. Slow! Solaris tried to improve that a bit by getting that information from the in-memory copy of the file that contained the .txt segment, but I don't know if they succeeded. One of the Helios folks from 0xide would probably know.

--dave

On 2/3/25 11:43, Ron / BCLUG via talk wrote:
D. Hugh Redelmeier via talk wrote on 2025-02-02 20:30:
I think this is correct: basically most of the processes are Python interpreters plus the scripts they're running.
Python interpreters are surely shared. That's the way Linux works for compiled languages like C.
Think of it like this:
$ python prog1.py &
$ python prog2.py &
There are now two python interpreters loaded in RAM.
That's what's going on here.
Similarly with compiled C programs:
$ ./prog1 &
$ ./prog2 &
Those programs might be loading dynamic libraries, but they'd be loading them twice.
If byte-code isn't shared, this seems like a scandal.
It is indeed kind of scandalous.
If the space is actually being consumed by data, how could mailman be so profligate?

Just a bunch of python scripts running.
Check it out on penguin, running mailman v2:
$ pgrep python | wc -l
8
Alternately (lines trimmed to avoid wrapping):
$ ps -ef | grep python
/usr/bin/python /usr/lib/mailman/bin/mailmanctl -s -q start
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=ArchRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=BounceRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=CommandRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=IncomingRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=NewsRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=OutgoingRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=VirginRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=RetryRunner
I can't imagine the kernel seeing all those python scripts running and de-duplicating RAM like some file systems will de-duplicate storage (ZFS) as that is expensive itself.
Corrections welcome.
-- David Collier-Brown, | Always do right. This will gratify System Programmer and Author | some people and astonish the rest davecb@spamcop.net | -- Mark Twain

From: David Collier-Brown via talk <talk@gtalug.org>
Niggle: the code (.txt) of the python interpreters is shared, but not the stack, the rss nor the heap.
Right. Except you don't mean rss, you mean data and bss segments (at least in the old days). ELF supports a confusing array of segments, each with different characteristics.
Starting a new process still requires reading the binary from disk to get the rss contents and initial heap size. Slow!
TL;DR: I dig a little deeper but still have no explanation.

Actually, ELF is designed to not require all of the binary (I would hope not: it can include extensive tables for debuggers). I think that loaders these days use memory mapped I/O. ELF has been designed to facilitate that. Only things that are touched get loaded. Read-only segments are shared.

$ size /bin/python3
   text    data     bss     dec     hex filename
   1789     668       4    2461     99d /bin/python3

So: this is very small. It must just load the real python3 from somewhere else.

$ ldd /bin/python3
        linux-vdso.so.1 (0x00007fb51bf6e000)
        libpython3.13.so.1.0 => /lib64/libpython3.13.so.1.0 (0x00007fb51b800000)
        libc.so.6 => /lib64/libc.so.6 (0x00007fb51b60d000)
        libm.so.6 => /lib64/libm.so.6 (0x00007fb51be69000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fb51bf70000)

$ size /lib64/libpython3.13.so.1.0
   text    data     bss     dec     hex filename
4611602  860128  496489 5968219  5b115b /lib64/libpython3.13.so.1.0

So there will be one copy of the text in the system memory even if there are hundreds of pythons. Some of the data segment will have one copy, some will be one-per-process. (There will be a bunch of data segments; the read-only ones will be shared.) The bss segment holds uninitialized static variables. UNIX/Linux dictates that they will be zeroed before the program starts. They occupy no disk space. The heap and stack don't show up until runtime.

The biggest component is the immutable code: 4.5MB. That cannot explain memory use in the gigabyte range, especially since there will be only one copy.

The python code for (old) Mailman is mostly in /usr/lib/mailman/Mailman. The size of it, including .py and .pyc files, is only 3MB. That doesn't seem to be the culprit.

But we really want to study Mailman3, and I don't have it.
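
To push the same question onto a live process, the per-mapping Pss values in /proc/<pid>/smaps show where the resident memory actually sits - shared file-backed code versus private heap and anonymous mappings. A sketch (pass it the PID of one of the runners; needs root for processes you don't own):

import collections, sys

pid = sys.argv[1]                      # e.g. a qrunner PID
pss = collections.Counter()
name = "?"
with open(f"/proc/{pid}/smaps") as f:
    for line in f:
        if line[0] in "0123456789abcdef":
            # mapping header: "addr perms offset dev inode [pathname]"
            parts = line.split(maxsplit=5)
            name = parts[5].strip() if len(parts) == 6 else "[anon]"
        elif line.startswith("Pss:"):
            pss[name] += int(line.split()[1])    # kB

for name, kb in pss.most_common(10):
    print(f"{kb / 1024:8.1f} MiB  {name}")

If the big entries turn out to be [heap] and anonymous mappings rather than libpython, that fits the point above: the immutable code isn't what's eating the memory.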

On 2/4/25 20:32, D. Hugh Redelmeier via talk wrote:
From: David Collier-Brown via talk <talk@gtalug.org>
Niggle: the code (.txt) of the python interpreters is shared, but not the stack, the rss nor the heap.
Right. Except you don't mean rss, you mean data and bss segments (at least in the old days).
Whoops! BSS is "block starting with symbol". RSS is "really simple syndication" .. or just /maybe/ "rat starting with symbol" (:-))

--dave

-- David Collier-Brown, | Always do right. This will gratify System Programmer and Author | some people and astonish the rest davecb@spamcop.net | -- Mark Twain

From: David Collier-Brown via talk <talk@gtalug.org>
BSS is "block starting with symbol". RSS is "really simple syndication" .. or just /maybe/ "rat starting with symbol" (:-))
RSS could (more relevantly) be Resident Set Size. But that's a runtime property.

BSS (Block Starting with Symbol) was an IBM 7xx or 7xxx series assembler statement for reserving but not initializing a chunk of memory. Something like:

* RESERVE 1 36-BIT WORD AND NAME IT "COUNTER"
COUNTER BSS 1
* RESERVE ENOUGH SPACE FOR A LINE OF TEXT (6 CHARACTERS PER WORD)
LINE    BSS 120 / 6

BES was Block Ending with Symbol. The label is given the address of the last word. BES hasn't been immortalized.

IBM assemblers for later machines used DS instead of BSS.

This is where the BSS segment name came from. The segment is just for statically allocated but not initialized memory. The mnemonic is no longer mnemonic.

On 2025-02-03 11:43, Ron / BCLUG via talk wrote:
D. Hugh Redelmeier via talk wrote on 2025-02-02 20:30:
I think this is correct: basically most of the processes are Python interpreters plus the scripts they're running.
Python interpreters are surely shared. That's the way Linux works for compiled languages like C.
Think of it like this:
$ python prog1.py &
$ python prog2.py &

If my memory serves me correctly, all the code is shared between the processes. It's just the data which is private to each process. Now in the case of an interpreter, the xxx.py programs are just data from the OS point of view, so that is duplicated.
There are now two python interpreters loaded in RAM.
That's what's going on here.
Similarly with compiled C programs:
$ ./prog1 & $ ./prog2 &
Those programs might be loading dynamic libraries, but they'd be loading them twice.
Once again I am fairly sure only the data is duplicated and the code is shared and write protected.
If byte-code isn't shared, this seems like a scandal.
It is indeed kind of scandalous.
It should be possible for the byte-code to be shared among processes but that would likely add a layer of complexity that the developers would not be comfortable with.
If the space is actually being consumed by data, how could mailman be so profligate?

Just a bunch of python scripts running.
Check it out on penguin, running mailman v2:
$ pgrep python | wc -l
8
Alternately (lines trimmed to avoid wrapping):
$ ps -ef | grep python
/usr/bin/python /usr/lib/mailman/bin/mailmanctl -s -q start
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=ArchRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=BounceRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=CommandRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=IncomingRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=NewsRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=OutgoingRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=VirginRunner
/usr/bin/python /var/lib/mailman/bin/qrunner --runner=RetryRunner
I can't imagine the kernel seeing all those python scripts running and de-duplicating RAM like some file systems will de-duplicate storage (ZFS) as that is expensive itself.
Corrections welcome.
-- Alvin Starr || land: (647)478-6285 Netvel Inc. || home: (905)513-7688 alvin@netvel.net ||

Alvin Starr via talk wrote on 2025-02-03 10:20:
Think of it like this:
$ python prog1.py &
$ python prog2.py &
If my memory serves me correctly all the code is shared between the processes Its just the data which is private to each process. Now in the case of an interpreter, the xxx.py programs are just data from the OS point of view so that is duplicated.
I'd like to read more about that if anyone has links that indicate launching multiple programs (i.e. Python scripts) only load python once.

The systemctl output shows multiple pythons running, each with their own PID (lines chopped to avoid wrapping):

Tasks: 9 (limit: 4915)
Memory: 95.2M
CGroup: /system.slice/mailman.service
├─25091 /usr/bin/python /usr/lib/mailman/bin/mailmanctl
├─25092 /usr/bin/python /var/lib/mailman/bin/qrunner
├─25093 /usr/bin/python /var/lib/mailman/bin/qrunner
├─25094 /usr/bin/python /var/lib/mailman/bin/qrunner
├─25095 /usr/bin/python /var/lib/mailman/bin/qrunner
├─25096 /usr/bin/python /var/lib/mailman/bin/qrunner
├─25097 /usr/bin/python /var/lib/mailman/bin/qrunner
├─25098 /usr/bin/python /var/lib/mailman/bin/qrunner
└─25099 /usr/bin/python /var/lib/mailman/bin/qrunner

Surely that means python is loaded and re-loaded?

I'm going to dig around a bit and see if I can account for that 95.2M RAM usage...

Python2.7 interpreter size:

# ls -l $(which python2.7) -h
... 3.7M 2022-02-06 15:16 /usr/bin/python2.7*

The qrunner which is a python script that is invoked repeatedly with different options (which are, in turn other scripts):

# ls -l /usr/lib/mailman/bin/qrunner
... 9617 2022-06-08 14:24 /usr/lib/mailman/bin/qrunner*

And these are the scripts loaded by qrunner:

# ll /usr/lib/mailman/Mailman/Queue/ArchRunner.py*
... 3099 2022-06-08 14:24 /usr/.../Mailman/Queue/ArchRunner.py
... 1831 2022-06-22 17:55 /usr/.../Mailman/Queue/ArchRunner.pyc
# ll /usr/lib/mailman/Mailman/Queue/BounceRunner.py*
... 15901 2022-06-08 14:24 /usr/.../Mailman/Queue/BounceRunner.py
... 8830 2022-06-22 17:55 /usr/.../Mailman/Queue/BounceRunner.pyc
# ll /usr/lib/mailman/Mailman/Queue/CommandRunner.py*
... 11882 2022-06-08 14:24 /usr/.../Mailman/Queue/CommandRunner.py
... 7639 2022-06-22 17:55 /usr/.../Mailman/Queue/CommandRunner.pyc
# ll /usr/lib/mailman/Mailman/Queue/IncomingRunner.py*
... 9080 2022-06-08 14:24 /usr/.../Mailman/Queue/IncomingRunner.py
... 2632 2022-06-22 17:55 /usr/.../Mailman/Queue/IncomingRunner.pyc
# ll /usr/lib/mailman/Mailman/Queue/NewsRunner.py*
... 7365 2022-06-08 14:24 /usr/.../Mailman/Queue/NewsRunner.py
... 4303 2022-06-22 17:55 /usr/.../Mailman/Queue/NewsRunner.pyc
# ll /usr/lib/mailman/Mailman/Queue/OutgoingRunner.py*
... 5707 2022-06-08 14:24 /usr/.../Mailman/Queue/OutgoingRunner.py
... 3282 2022-06-22 17:55 /usr/.../Mailman/Queue/OutgoingRunner.pyc
# ll /usr/lib/mailman/Mailman/Queue/VirginRunner.py*
... 1732 2022-06-08 14:24 /usr/.../Mailman/Queue/VirginRunner.py
... 1311 2022-06-22 17:55 /usr/.../Mailman/Queue/VirginRunner.pyc
# ll /usr/lib/mailman/Mailman/Queue/RetryRunner.py*
... 1481 2022-06-08 14:24 /usr/.../Mailman/Queue/RetryRunner.py
... 1439 2022-06-22 17:55 /usr/.../Mailman/Queue/RetryRunner.pyc

Adding the *.pyc file sizes:

1831 + 8830 + 7639 + 2632 + 4303 + 3282 + 1311 + 1439 = 31267 bytes.

Each of those is invoked via qrunner (8 instances at 9617 bytes):

9617 * 8 = 76936 bytes

So far, I can account for 76936 + 31267 bytes = 108203 bytes.

Need this process too:

# ls -l /usr/lib/mailman/bin/mailmanctl
... 21431 2022-06-08 14:24 /usr/lib/mailman/bin/mailmanctl*

Up to 21431 + 108203 = 129634 bytes.
# systemctl status mailman
Tasks: 9 (limit: 4915)
Memory: 95.2M

My numbers are way, *way* short of 96MB, so let's add in some Python2.7 interpreters (one for each qrunner, as qrunner is a python script, and one for mailmanctl):

# file /var/lib/mailman/bin/qrunner
/var/lib/mailman/bin/qrunner: a /usr/bin/python script, ASCII text executable

9 * 3775416 = 33,978,744 bytes (≈ 34 MB)

So, accounting for all the compiled python code plus the interpreters gets to about ⅓ the RAM usage of Mailman v2.

That seems... reasonable?

Have I just wasted a bunch of time and entirely gotten the part about multiple Python interpreters loaded in memory wrong?
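
One thing the file sizes can't show: each interpreter allocates a heap at startup (interned objects, imported stdlib modules, and so on) that is much bigger than the byte-code on disk. A trivial check of what one idle interpreter weighs, independent of Mailman (VmRSS is the process's resident set, in kB):

def vmrss_kib():
    # Resident set size of this very process, from /proc.
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

print(f"bare interpreter RSS: {vmrss_kib() / 1024:.1f} MiB")

Multiply whatever that prints by nine processes and the 95.2M figure looks a lot less mysterious - though plain RSS double-counts the pages the processes share, so it overshoots.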

On 2025-02-03 14:40, Ron / BCLUG via talk wrote:
Alvin Starr via talk wrote on 2025-02-03 10:20:
Think of it like this:
$ python prog1.py &
$ python prog2.py &
If my memory serves me correctly all the code is shared between the processes Its just the data which is private to each process. Now in the case of an interpreter, the xxx.py programs are just data from the OS point of view so that is duplicated.
https://www.kernel.org/doc/gorman/html/understand/understand007.html looks to describe the overall memory management of Linux.
I'd like to read more about that if anyone has links that indicate launching multiple programs (i.e. Python scripts) only load python once.
A way to check this would be to load a single python interpreter and look at the memory used. Then load multiple copies of the same program and see if the memory usage is N*mem_used and not something less than N*mem_used.
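
A rough version of that test, as a sketch (idle interpreters rather than real qrunners, so it only isolates the interpreter itself): start N copies, then compare the sum of their Rss values with the sum of their Pss values. If nothing were shared the two sums would be about equal; the amount by which the Pss total comes in lower is the sharing.

import subprocess, sys, time

N = 8
procs = [subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"])
         for _ in range(N)]
time.sleep(1)           # give them a moment to finish starting up

def rollup_field(pid, name):
    with open(f"/proc/{pid}/smaps_rollup") as f:
        for line in f:
            if line.startswith(name + ":"):
                return int(line.split()[1])    # kB
    return 0

rss = sum(rollup_field(p.pid, "Rss") for p in procs)
pss = sum(rollup_field(p.pid, "Pss") for p in procs)
print(f"{N} idle interpreters: sum Rss = {rss/1024:.1f} MiB, sum Pss = {pss/1024:.1f} MiB")

for p in procs:
    p.terminate()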
The systemctl output shows multiple pythons running, each with their own PID (lines chopped to avoid wrapping):
Tasks: 9 (limit: 4915)
Memory: 95.2M
CGroup: /system.slice/mailman.service
├─25091 /usr/bin/python /usr/lib/mailman/bin/mailmanctl
├─25092 /usr/bin/python /var/lib/mailman/bin/qrunner
├─25093 /usr/bin/python /var/lib/mailman/bin/qrunner
├─25094 /usr/bin/python /var/lib/mailman/bin/qrunner
├─25095 /usr/bin/python /var/lib/mailman/bin/qrunner
├─25096 /usr/bin/python /var/lib/mailman/bin/qrunner
├─25097 /usr/bin/python /var/lib/mailman/bin/qrunner
├─25098 /usr/bin/python /var/lib/mailman/bin/qrunner
└─25099 /usr/bin/python /var/lib/mailman/bin/qrunner
Surely that means python is loaded and re-loaded?
You have other things going on here. There is the memory used up by the interpreter reading in the various bits of .py and .pyc files. Then you have the memory used by the actual execution of the python programs.
I'm going to dig around a bit and see if I can account for that 95.2M RAM usage...
Python2.7 interpreter size:
# ls -l $(which python2.7) -h ... 3.7M 2022-02-06 15:16 /usr/bin/python2.7*
The qrunner which is a python script that is invoked repeatedly with different options (which are, in turn other scripts):
# ls -l /usr/lib/mailman/bin/qrunner ... 9617 2022-06-08 14:24 /usr/lib/mailman/bin/qrunner*
And these are the scripts loaded by qrunner:
ll /usr/lib/mailman/Mailman/Queue/ArchRunner.py* ... 3099 2022-06-08 14:24 /usr/.../Mailman/Queue/ArchRunner.py ... 1831 2022-06-22 17:55 /usr/.../Mailman/Queue/ArchRunner.pyc
# ll /usr/lib/mailman/Mailman/Queue/BounceRunner.py* ... 15901 2022-06-08 14:24 /usr/.../Mailman/Queue/BounceRunner.py ... 8830 2022-06-22 17:55 /usr/.../Mailman/Queue/BounceRunner.pyc
# ll /usr/lib/mailman/Mailman/Queue/CommandRunner.py* ... 11882 2022-06-08 14:24 /usr/.../Mailman/Queue/CommandRunner.py ... 7639 2022-06-22 17:55 /usr/.../Mailman/Queue/CommandRunner.pyc
# ll /usr/lib/mailman/Mailman/Queue/IncomingRunner.py* ... 9080 2022-06-08 14:24 /usr/.../Mailman/Queue/IncomingRunner.py ... 2632 2022-06-22 17:55 /usr/.../Mailman/Queue/IncomingRunner.pyc
# ll /usr/lib/mailman/Mailman/Queue/NewsRunner.py* ... 7365 2022-06-08 14:24 /usr/.../Mailman/Queue/NewsRunner.py ... 4303 2022-06-22 17:55 /usr/.../Mailman/Queue/NewsRunner.pyc
# ll /usr/lib/mailman/Mailman/Queue/OutgoingRunner.py* ... 5707 2022-06-08 14:24 /usr/.../Mailman/Queue/OutgoingRunner.py ... 3282 2022-06-22 17:55 /usr/.../Mailman/Queue/OutgoingRunner.pyc
# ll /usr/lib/mailman/Mailman/Queue/VirginRunner.py* ... 1732 2022-06-08 14:24 /usr/.../Mailman/Queue/VirginRunner.py ... 1311 2022-06-22 17:55 /usr/.../Mailman/Queue/VirginRunner.pyc
# ll /usr/lib/mailman/Mailman/Queue/RetryRunner.py* ... 1481 2022-06-08 14:24 /usr/.../Mailman/Queue/RetryRunner.py ... 1439 2022-06-22 17:55 /usr/.../Mailman/Queue/RetryRunner.pyc
Adding the *.pyc file sizes:
1831 + 8830 + 7639 + 2632 + 4303 + 3282 + 1311 + 1439 = 31267 bytes.
Each of those is invoked via qrunner (8 instances at 9617 bytes):
9617 * 8 = 76936 bytes
So far, I can account for 76936 + 31267 bytes = 108203 bytes.
A couple of things. The .py and .pyc files are only the python code portion and they may load in other things like files from the disk. They also likely reference other python libraries that reference other python libraries ..... So you need to know all the python code that is loaded up when you start your program. You also have things that are statically allocated memory within the python program that may be a few tens of bytes of code but could be arbitrarily large memory blocks.
Need this process too:
# ls -l /usr/lib/mailman/bin/mailmanctl ... 21431 2022-06-08 14:24 /usr/lib/mailman/bin/mailmanctl*
Up to 21431 + 108203 = 129634 bytes.
# systemctl status mailman
Tasks: 9 (limit: 4915)
Memory: 95.2M
My numbers are way, *way* short of 96MB so let's add in some Python2.7 interpreters (one for each qrunner) as qrunner is a python script, and one for mailmanctl:
# file /var/lib/mailman/bin/qrunner /var/lib/mailman/bin/qrunner: a /usr/bin/python script, ASCII text executable
9 * 3775416 = 33,978,744 bytes (≈ 34 MB)
So, accounting for all the compiled python code plus the interpreters gets to about ⅓ the RAM usage of Mailman v2.
That seems... reasonable?
Have I just wasted a bunch of time and entirely gotten the part about multiple Python interpreters loaded in memory wrong?
-- Alvin Starr || land: (647)478-6285 Netvel Inc. || home: (905)513-7688 alvin@netvel.net ||

Alvin Starr via talk wrote on 2025-02-03 12:22:
https://www.kernel.org/doc/gorman/html/understand/understand007.html looks to describe the overall memory management of Linux.
I couldn't make heads nor tails of that, it's over my head.
A way to check this would be to load a single python interpreter and look at the memory used. Then load multiple copies of the same program and see if the memory usage is N*mem_used and not something less than N*mem_used.
I don't have anywhere to test that which won't have too many confounding variables. I'll still try it if I get a chance.

But I did have a thought -- if MM3 is doing basically the same tasks as MM2 but taking up so much more memory, I kinda wonder if the extra Python3 instances with their separate PIDs explain that better than workload / data does, considering "same basic functionality".

In other news, I did hear back from the MM3 author (Mark Sapiro) on how to disable NNTP on MM2, so I've added that to GTALUG's config (but have *not* restarted it).

It did remove one Python from my MM2 server:

Freshly restarted MM2 without NNTP:

Tasks: 8 (limit: 2256)
Memory: 83.7M

With NNTP:

Tasks: 9 (limit: 2256)
Memory: 93.9M

Hmmm - a 10MB difference for one process - NNTP gateway. That cannot just be the Python script, it's gotta include 3.5MB Python2.7 (IMHO).
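
The exact setting isn't quoted above, but for anyone wanting to do the same: the knob usually pointed at is the QRUNNERS list that Mailman 2 defines in Defaults.py and lets you override in mm_cfg.py - copy the list there without the NewsRunner entry. Treat this as a sketch and check the Defaults.py shipped with your install before using it:

# mm_cfg.py -- override the stock runner list, leaving out the NNTP gateway.
# (The canonical list lives in Defaults.py; don't edit that file directly.)
QRUNNERS = [
    ('ArchRunner',     1),   # messages for the archiver
    ('BounceRunner',   1),   # processes bounces
    ('CommandRunner',  1),   # email commands from the outside world
    ('IncomingRunner', 1),   # posts from the outside world
    ('OutgoingRunner', 1),   # outgoing messages to the MTA
    ('VirginRunner',   1),   # internally crafted messages
    ('RetryRunner',    1),   # retries temporarily failed deliveries
    ]

After a restart of the mailman service there should be one fewer qrunner, which matches the Tasks: 8 figure below.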

From: Ron / BCLUG via talk <talk@gtalug.org>
Thanks for looking at this.

Here are a couple of Mailman 3 merge requests that are interesting:
<https://gitlab.com/mailman/mailman/-/issues/1050>
<https://gitlab.com/mailman/mailman/-/merge_requests/1093>

I'm not saying that the code is correct but that what has been explained is useful:

- simply reducing the queue runners reduces the space requirements. I would guess that GTALUG's lists could be well served by as few as one or two queue runners.

- the processes each have an unshared read/write copy of all the python code. Very wasteful. Each process has to load the code, even though they are all the same. That costs time, disk reads, and space.

- the code is loaded into the heap, apparently mingled with mutable things. This makes sharing via Copy on Write much less effective.

- Python is bone-headed about forks. In a host of ways. I knew that.

D. Hugh Redelmeier via talk wrote on 2025-02-08 09:25:
Here are a couple of Mailman 3 merge requests that are interesting. <https://gitlab.com/mailman/mailman/-/issues/1050> <https://gitlab.com/mailman/mailman/-/merge_requests/1093>
Looks like that guy's done some good work at reducing overhead. Shame it's 2 years old. Thanks for pointing them out though, interesting reading.
I'm not saying that the code is correct but that what has been explained is useful:
- simply reducing the queue runners reduces the space requirements. I would guess that GTALUG's lists could be well served by as few as one or two queue runners.
- the processes each have an unshared read/write copy of all the python code. Very wasteful. Each process has to load the code, even though they are all the same. That costs time, disk reads, and space.
- the code is loaded into the heap, apparently mingled with mutable things. This makes sharing via Copy on Write much less effective.
- Python is bone-headed about forks. In a host of ways. I knew that.
This last part I didn't know. Care to expand upon it - interesting topic in its own right.

From: Ron / BCLUG via talk <talk@gtalug.org>
D. Hugh Redelmeier via talk wrote on 2025-02-08 09:25:
- Python is bone-headed about forks. In a host of ways. I knew that.
This last part I didn't know.
Care to expand upon it - interesting topic in its own right.
Disclaimer: I don't program in Python. The language itself is easy enough to understand but the libraries are a lot (good for functionality but daunting to master). Anyway, it means that I may have some things wrong about Python.

Python was designed to do its own version of threads. That design didn't exploit the capabilities of modern computers: multiple cores. There appear to be a variety of hacks to get around this. Too often shared resources don't work, and they don't work in mysterious ways.

Global python locks are no longer global if you use a UNIX fork. This can manifest in non-deterministic ways. One problem is that these locks are thought to be private, within the implementation of some abstraction, but their misbehaviour in the face of fork is definitely a public problem.

See, for example:
<https://bugzilla.redhat.com/show_bug.cgi?id=1691434>
<https://bugs.python.org/issue35866>

It took over two years to close this bug. Discovering race conditions should not be left to test cases. This is a prime example of never being able to prove the absence of bugs through testing.

The main answer from the Python people was "don't fork": the library wasn't designed to work with it. Perhaps it is better six years on.
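
A tiny illustration of the class of problem (my sketch, not from the bug reports above): fork() duplicates the whole address space, including a threading.Lock that another thread happens to hold - but that thread doesn't exist in the child, so nothing will ever release it. Libraries that take internal locks on background threads hit exactly this, in exactly the non-deterministic way described.

import os, threading, time

lock = threading.Lock()

def hold_briefly():
    with lock:
        time.sleep(2)              # a helper thread holds the lock for a while

threading.Thread(target=hold_briefly).start()
time.sleep(0.1)                    # make sure the lock is held when we fork

pid = os.fork()
if pid == 0:
    # Child: the lock was copied in its locked state, but the thread that
    # would release it was not copied, so this acquire can only time out.
    got = lock.acquire(timeout=5)
    print("child acquired lock:", got)    # prints False
    os._exit(0)

os.waitpid(pid, 0)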

On Thu, 6 Feb 2025 at 20:50, Ron / BCLUG via talk <talk@gtalug.org> wrote:
Alvin Starr via talk wrote on 2025-02-03 12:22:
https://www.kernel.org/doc/gorman/html/understand/understand007.html looks to describe the overall memory management of Linux.
I couldn't make heads nor tails of that, it's over my head.
A way to check this would be to load a single python interpreter and look at the memory used. Then load multiple copies of the same program and see if the memory usage is N*mem_used and not something less than N*mem_used.
I don't have anywhere to test that which won't have too many confounding variables.
I'll still try it if I get a chance.
But I did have a thought -- if MM3 is doing basically the same tasks as MM2 but taking up so much more memory, I kinda wonder if the extra Python3 instances with their separate PIDs explain that better than workload / data does, considering "same basic functionality".
In other news, I did hear back from the MM3 author (Mark Sapiro) on how to disable NNTP on MM2, so I've added that to GTALUG's config (but have *not* restarted it).
All of my previous experience managing servers has taught me to never do this: if you make a change, restart and test immediately. Or don't make the change - write yourself a note to try it later. The server is running now: no matter how trivial the change, you run a risk of this causing problems on restart or reboot. And if the restart isn't for months and/or is done by someone else, they will be confounded as to why a normal restart has left them with a broken server. I managed to do this to myself several times - and once or twice to other people. I stopped making untested changes to server configs, and I'm hoping you can benefit from the lesson(s) I learned.
It did remove one Python from my MM2 server:
Freshly restarted MM2 without NNTP:
Tasks: 8 (limit: 2256) Memory: 83.7M
With NNTP:
Tasks: 9 (limit: 2256) Memory: 93.9M
Hmmm - a 10MB difference for one process - NNTP gateway.
That cannot just be the Python script, it's gotta include 3.5MB Python2.7 (IMHO).
-- Giles https://www.gilesorr.com/ gilesorr@gmail.com

Giles Orr via talk wrote on 2025-02-10 16:33:
All of my previous experience managing servers has taught me to never do this: if you make a change, restart and test immediately. Or don't make the change - write yourself a note to try it later.
Good points. I'd lightly tested on another list and was kind of awaiting any feedback. There were lots of comments explaining the change, but I should've commented out the change until feedback came (or not). It has now been applied. We officially no longer have the code to bridge email <--> NNTP in RAM at all times. And, I see the last message was delivered, so I'd say it's working.
participants (7)
- ac
- Alvin Starr
- D. Hugh Redelmeier
- David Collier-Brown
- Giles Orr
- Ron / BCLUG
- Steve Litt