From coff at tuhs.org Sun Nov 2 20:25:00 2025 From: coff at tuhs.org (Lars Brinkhoff via COFF) Date: Sun, 02 Nov 2025 10:25:00 +0000 Subject: [COFF] [TUHS] evolution of the cli In-Reply-To: (A. P. Garcia via TUHS's message of "Sat, 1 Nov 2025 10:59:21 -0400") References: Message-ID: <7wecqgu3yb.fsf@junk.nocrew.org> A. P. Garcia wrote: > The Evolution of the Command Line: From Terseness to Expression I wonder if you are also interested in some evolutionary dead ends? I'm thinking about the ITS DDT/HACTRN command line interface. It's even barely a command *line* because most commands are one or a few key strokes. It certainly fits your "from terseness" thesis. If you have used this type of user interface for a while, you may notice the fluidity, immediacy, almost subconscious transfer from thought to keystroke to action. I'd say this is something that may have been lost. But not entirely, because it still lives on in the form of Emacs. I'm rerouting this subthread away from TUHS to COFF. I have seen some hand-wringing arguments that "Emacs does too conform to Unix philosophy because ". I say no. Emacs does emphatically not conform to Unix philosophy, and it doesn't have to. It very much conforms to the ITS philosophy of user interfaces. Digging deeper through the historical strata, we can find a whiff of the Stanford AI lab in the Emacs user interface. Namely, the heavy use of modifiers: control, meta, and if you have them, super, hyper, greek. ITS natively used only control and added the Altmode (Escape) prefix for more commands. MIT imported the Stanford AI lab keyboard with more modifiers and made use of them in Emacs. The modifiers proliferated even more with the Lisp machines, which expanded on the ITS user interface. From coff at tuhs.org Mon Nov 3 04:18:26 2025 From: coff at tuhs.org (Warner Losh via COFF) Date: Sun, 2 Nov 2025 11:18:26 -0700 Subject: [COFF] [TUHS] evolution of the cli In-Reply-To: <7wecqgu3yb.fsf@junk.nocrew.org> References: <7wecqgu3yb.fsf@junk.nocrew.org> Message-ID: On Sun, Nov 2, 2025 at 3:25 AM Lars Brinkhoff via COFF wrote: > A. P. Garcia wrote: > > The Evolution of the Command Line: From Terseness to Expression > > I wonder if you are also interested in some evolutionary dead ends? I'm > thinking about the ITS DDT/HACTRN command line interface. It's even > barely a command *line* because most commands are one or a few key > strokes. It certainly fits your "from terseness" thesis. > Yet there's another line of command lines to look at: TOPS-20 and VMS. These were quite verbose by the early 80s and provided a very rich grammar to use. These have gone in the extreme other direction from ITS. Warner From coff at tuhs.org Sat Nov 8 11:48:06 2025 From: coff at tuhs.org (Dan Cross via COFF) Date: Fri, 7 Nov 2025 20:48:06 -0500 Subject: [COFF] [TUHS] Re: To NDEBUG or not to NDEBUG, that is the question In-Reply-To: References: Message-ID: On Fri, Nov 7, 2025 at 8:40 PM wrote: > Quoth Dan Cross : > > On Fri, Nov 7, 2025 at 1:54 PM Ori Bernstein wrote: > > > On Fri, 17 Oct 2025 08:22:23 -0400, Dan Cross via TUHS wrote: > > > > > > > One must question whether `assert` is the right thing or not, though; > > > > as an interface, it's pretty limited: a thing can either be true or > > > > not, but any surrounding information is not preserved; > > > > > > This is untrue -- with core dumps enabled, surrounding information is > > > preserved better than most other options. > > > > Apples and oranges.
Presumably you're referring to a post-mortem > > analysis pointing a debugger at a core file and binary, but that vs > > the printed output from a failed `assert` is sufficiently dissimilar > > as to be specious. Furthermore, there's no guarantee that a failed > > `assert` will result in a core file being produced; production of core > > dumps can be disabled, but even if not, a process can catch `SIGABRT` > > and `longjmp` out of the handler; POSIX and C both explicitly allow > > for that. > > > > There is a reason people invent `ASSERT3x` macros for x in {U,P,I}, > > etc, and it's not just for kicks. And besides, as Arnold pointed out, > > it _can_ be done with `assert`: it's just ugly and painful. And of > > course, one can always use an explicit conditional, print whatever one > > likes, and call `abort()` directly if one wants (possibly resetting > > the signal handler to the default before-hand). You may still not get > > a core dump, but at least you can print whatever context you like. > > Yes, it's certainly possible for people to sabotage the usefulness > of a well placed abort. This is something I find frustrating when > debugging, because even the best stack traces lack a great deal of > information. [This is getting into COFF territory; TUHS to Bcc:] Not just users, but administrators, system policy and so forth; consider an `abort` in a setuid program. And of course, stack traces can be generated without the use of `abort()` or production of a core file; there are pre-canned libraries for pretty much all mainstream systems for doing that these days. Post mortem analysis is undeniably useful. But I maintain that it is _mostly_ orthogonal to `assert`. - Dan C. From coff at tuhs.org Sat Nov 8 13:12:20 2025 From: coff at tuhs.org (segaloco via COFF) Date: Sat, 08 Nov 2025 03:12:20 +0000 Subject: [COFF] [TUHS] Re: To NDEBUG or not to NDEBUG, that is the question In-Reply-To: References: Message-ID: On Friday, November 7th, 2025 at 17:48, Dan Cross via COFF wrote: > On Fri, Nov 7, 2025 at 8:40 PM ori at eigenstate.org wrote: > > > Quoth Dan Cross crossd at gmail.com: > > > > > On Fri, Nov 7, 2025 at 1:54 PM Ori Bernstein ori at eigenstate.org wrote: > > > > > > > On Fri, 17 Oct 2025 08:22:23 -0400, Dan Cross via TUHS tuhs at tuhs.org wrote: > > > > > > > > > One must question whether `assert` is the right thing or not, though; > > > > > as an interface, it's pretty limited: a thing can either be true or > > > > > not, but any surrounding information is not preserved; > > > > > > > > This is untrue -- with core dumps enabled, surrounding information is > > > > preserved better than most other options. > > > > > > Apples and oranges. Presumably you're referring to a post-mortem > > > analysis pointing a debugger at a core file and binary, but that vs > > > the printed output from a failed `assert` is sufficiently dissimilar > > > as to be specious. Furthermore, there's no guarantee that a failed > > > `assert` will result in a core file being produced; production of core > > > dumps can be disabled, but even if not, a process can catch `SIGABRT` > > > and `longjmp` out of the handler; POSIX and C both explicitly allow > > > for that. > > > > > > There is a reason people invent `ASSERT3x` macros for x in {U,P,I}, > > > etc, and it's not just for kicks. And besides, as Arnold pointed out, > > > it can be done with `assert`: it's just ugly and painful. 
And of > > > course, one can always use an explicit conditional, print whatever one > > > likes, and call `abort()` directly if one wants (possibly resetting > > > the signal handler to the default before-hand). You may still not get > > > a core dump, but at least you can print whatever context you like. > > > > Yes, it's certainly possible for people to sabotage the usefulness > > of a well placed abort. This is something I find frustrating when > > debugging, because even the best stack traces lack a great deal of > > information. > > > [This is getting into COFF territory; TUHS to Bcc:] > > Not just users, but administrators, system policy and so forth; > consider an `abort` in a setuid program. And of course, stack traces > can be generated without the use of `abort()` or production of a core > file; there are pre-canned libraries for pretty much all mainstream > systems for doing that these days. > > Post mortem analysis is undeniably useful. But I maintain that it is > mostly orthogonal to `assert`. > > - Dan C. In my mind, assert implies some knowledge of a failure condition to look for. Post mortem I usually find myself doing is to find a failure condition that I am not aware of. Once found, an assertion can be made as a regression test against the now-known failure condition, which can be omitted via NDEBUG. That feels clean to me at least. - Matt G. From coff at tuhs.org Mon Nov 10 17:45:05 2025 From: coff at tuhs.org (Dan Cross via COFF) Date: Mon, 10 Nov 2025 02:45:05 -0500 Subject: [COFF] [TUHS] Re: To NDEBUG or not to NDEBUG, that is the question In-Reply-To: References: Message-ID: On Sun, Nov 9, 2025 at 10:22 PM wrote: > Quoth Dan Cross : > > Post mortem analysis is undeniably useful. But I maintain that it is > > _mostly_ orthogonal to `assert`. > > What are you doing with the printed values of assert (or the > stack trace), other than post mortem analysis? That's reductive. Surely there is a qualitative difference between reading an error message and invoking a debugger, no? And as I said, there are instances where you `assert` and no core file (or broken process) to debug is produced. - Dan C. (And of course I must acknowledge that I did misread your earlier statement about stack traces being at times insufficient.) From coff at tuhs.org Mon Nov 10 18:31:15 2025 From: coff at tuhs.org (Bakul Shah via COFF) Date: Mon, 10 Nov 2025 00:31:15 -0800 Subject: [COFF] [TUHS] Re: To NDEBUG or not to NDEBUG, that is the question In-Reply-To: References: Message-ID: > On Nov 9, 2025, at 11:45 PM, Dan Cross via COFF wrote: > > On Sun, Nov 9, 2025 at 10:22 PM wrote: >> Quoth Dan Cross : >>> Post mortem analysis is undeniably useful. But I maintain that it is >>> _mostly_ orthogonal to `assert`. >> >> What are you doing with the printed values of assert (or the >> stack trace), other than post mortem analysis? > > That's reductive. Surely there is a qualitative difference between > reading an error message and invoking a debugger, no? And as I said, > there are instances where you `assert` and no core file (or broken > process) to debug is produced. > > - Dan C. > > (And of course I must acknowledge that I did misread your earlier > statement about stack traces being at times insufficient.) What I would like is to see on assert() failure is for the system to invoke a debugger, provided matching source can be found. But this requires compilers/linkers to *not* throw away information[1]. 
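A minimal sketch of that wish, as far as stock tooling allows today: arrange for SIGABRT to hand the still-live process to gdb rather than (or before) dumping core. This assumes a POSIX system with gdb on $PATH and ptrace attachment permitted; the names are illustrative, and the handler is not strictly async-signal-safe, so it is a development-build convenience rather than something to ship.

    /*
     * Minimal sketch: when assert() aborts, try to attach a debugger to
     * the still-running process instead of only dumping core.  Assumes
     * a POSIX system with gdb on $PATH and ptrace attach permitted; the
     * handler is illustrative and not strictly async-signal-safe.
     */
    #include <assert.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    static char g_pid[16];            /* formatted up front, not in the handler */

    static void invoke_debugger(int sig)
    {
        (void)sig;
        signal(SIGABRT, SIG_DFL);     /* a second abort falls through to core */
        pid_t child = fork();
        if (child == 0) {
            execlp("gdb", "gdb", "-p", g_pid, (char *)NULL);
            _exit(127);               /* no gdb: give up, default abort follows */
        } else if (child > 0) {
            pause();                  /* park here so gdb sees the failing frame */
        }
    }

    int main(void)
    {
        snprintf(g_pid, sizeof g_pid, "%ld", (long)getpid());
        signal(SIGABRT, invoke_debugger);
        assert(1 + 1 == 3 && "demonstration failure");
        return 0;
    }

Built with -g and without -DNDEBUG, gdb lands in the frame whose assertion failed, with source and locals intact, which is about as close to "invoke a debugger on assert() failure" as a stock C toolchain gets.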
If a decent protocol is defined and appropriate access permissions are obtained, in theory a failure at a customer site can invoke the debugger at the developer site[2]. Then instead of an autopsy one can do a biopsy and may be even temporarily "cure" the patient! This can be useful when a system (or test) fails after many hours. [1] Would be nice to see C/C++/etc. compiled language tools to catch up to Lisp systems of the last century! [2] Dealing with leakage of customer/personal info is a separate issue but must be dealt with in any remote debugging protocol. From coff at tuhs.org Tue Nov 11 04:08:26 2025 From: coff at tuhs.org (Steffen Nurpmeso via COFF) Date: Mon, 10 Nov 2025 19:08:26 +0100 Subject: [COFF] [TUHS] Re: To NDEBUG or not to NDEBUG, that is the question In-Reply-To: References: Message-ID: <20251110180826.dA957V2G@steffen%sdaoden.eu> Bakul Shah via COFF wrote in : |> On Nov 9, 2025, at 11:45 PM, Dan Cross via COFF wrote: |> On Sun, Nov 9, 2025 at 10:22 PM wrote: |>> Quoth Dan Cross : |>>> Post mortem analysis is undeniably useful. But I maintain that it is |>>> _mostly_ orthogonal to `assert`. |>> |>> What are you doing with the printed values of assert (or the |>> stack trace), other than post mortem analysis? |> |> That's reductive. Surely there is a qualitative difference between |> reading an error message and invoking a debugger, no? And as I said, |> there are instances where you `assert` and no core file (or broken |> process) to debug is produced. |> |> - Dan C. |> |> (And of course I must acknowledge that I did misread your earlier |> statement about stack traces being at times insufficient.) | |What I would like is to see on assert() failure is for the system |to invoke a debugger, provided matching source can be found. But |this requires compilers/linkers to *not* throw away information[1]. | |If a decent protocol is defined and appropriate access permissions |are obtained, in theory a failure at a customer site can invoke |the debugger at the developer site[2]. Then instead of an autopsy |one can do a biopsy and may be even temporarily "cure" the patient! | |This can be useful when a system (or test) fails after many hours. | |[1] Would be nice to see C/C++/etc. compiled language tools to |catch up to Lisp systems of the last century! | |[2] Dealing with leakage of customer/personal info is a separate |issue but must be dealt with in any remote debugging protocol. Fwiw i totally disagree with any opinion that says asserts should remain in shipout code. For me there always has been debug-enabled developer code, and shipout code. The former goes many roads the latter will never see, for example the format codec validates the format string (not arguments though), the getopt parser does this, and ensures long options match their short equivalents etc, the memory cache validates pointers before access, and all that. Except for the latter this is all developer only, but the latter should also not mean a thing in shipouts. For most of all that i even use preprocessor switches to avoid compilation overhead for users. What has not yet been mentioned at all is the runtime behavior difference between debug and such optimized builds. This is a real problem. Especially so in true (let alone heavy) multithreading environments. Insofar i think the OSSL approach Salz mentioned, of having some kind of "verify" either panicking or returning an error, is possibly best, but, i have not looked, even the different code layout (likely) resulting from that, ie, function call preparations, relative jump differences, different sizes of .RODATA etc, you know, could play a role. To me assertions are developer-only basic preconditions, which should never ever trigger in mature code. If there is even a slight chance they could trigger, then regular error conditions are due.
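A generic sketch of that "verify" pattern as described here (not OpenSSL's actual macro): in a developer build a failed check reports and aborts, in a shipout build it quietly evaluates to false so the caller takes its regular error path. DEVEL_BUILD is a made-up switch for the example.

    /*
     * Sketch of a "verify" check as described above (not OpenSSL's macro):
     * developer builds abort on failure, shipout builds degrade the check
     * to an ordinary error return.  DEVEL_BUILD is a made-up switch.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #ifdef DEVEL_BUILD
    #define verify(expr) \
        ((expr) ? 1 : (fprintf(stderr, "%s:%d: verify(%s) failed\n", \
                               __FILE__, __LINE__, #expr), abort(), 0))
    #else
    #define verify(expr) ((expr) != 0)
    #endif

    /* Example caller: in shipout builds a NULL argument is an error,
     * not a crash. */
    static int frobnicate(const char *p)
    {
        if (!verify(p != NULL))
            return -1;
        puts(p);
        return 0;
    }

    int main(void)
    {
        frobnicate("hello");
        return frobnicate(NULL) == 0 ? 0 : 1;
    }

Because the check and its error path stay present in both builds, the code layout differs less between them than with a plain -DNDEBUG assert, though, as noted above, the difference does not vanish entirely.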
In fact i started to diversify my code a bit further after having seen that package maintainers sometimes enable debug code, resulting in development code paths being included. (ASSERT is still based upon -DNDEBUG though). One maintainer (i am thankful for everyone who goes down that road!) of a distribution which only provides binaries now even explicitly uses git checkouts that include development cruft, even though the normal releases are based upon stripped checkouts, for faster compilation, manual display, etc. That is to say that one should carefully take into account what could be done to the software "downstream". For me all that will surely move further behind some "devel"opment curtain, not only "debug", or even only -DNDEBUG. I hate bugs, i hate all that, i do not want normal users to have a need to face such development mess. No. I mean, it is easy for OSSL, with their perl build environment, and they have the standing to simply say "that is unsupported". This will not work except with good will for most other projects. P.S.: i hate debuggers. In case of a crash there are thread-specific call graphs managed in software. Takes time, but gives a path over hundreds or more function calls. You say Potaetoe, and i say Potato. Maybe. --End of --steffen | |Der Kragenbaer, The moon bear, |der holt sich munter he cheerfully and one by one |einen nach dem anderen runter wa.ks himself off |(By Robert Gernhardt) From coff at tuhs.org Tue Nov 11 09:17:32 2025 From: coff at tuhs.org (David Barto via COFF) Date: Mon, 10 Nov 2025 15:17:32 -0800 Subject: [COFF] [TUHS] Re: To NDEBUG or not to NDEBUG, that is the question In-Reply-To: <20251110180826.dA957V2G@steffen%sdaoden.eu> References: <20251110180826.dA957V2G@steffen%sdaoden.eu> Message-ID: At a company I worked for, we caught any exception (OOM, SIGTERM, SIGHUP as examples) that would cause the application to exit. In the exception handler we wrote out 100’s of MB of state data of the program, including stack traces for all the threads (1000’s of those) along with data structures and anything else we could think of. (Memory allocation traces and queries that were running as examples). This was done with very carefully crafted code which could not call any other functions, nor allocate any memory. This was all written in a format that allowed us to load it into the same database in our office, where we could then write queries against the data to see what happened and where the program was when it occurred. We called the data dump an 'x-ray' and the program that loaded it into the database and supported us examining the data ’the doctor’. A common thing to hear was “I’m running the doctor on an x-ray from customer ”, or “the X-ray showed that we designed the query wrong, it should have had a join which would reduce the memory footprint by N-GB”. As far as post-mortem debugging goes, it was an amazing environment and was exceptional at finding bugs in the code without having to use a standard debugger.
No core files required.[1] It also let us ’Take an X-Ray’ of the running system while on the phone with the customer, allowing us to examine what was happening before they did “the next step” which would crash the system. David [1] - there were several users of the system who would not let a core file leave the building b/c of security. > On Nov 10, 2025, at 10:08 AM, Steffen Nurpmeso via COFF wrote: > > Bakul Shah via COFF wrote in > : > |> On Nov 9, 2025, at 11:45 PM, Dan Cross via COFF wrote: > |> On Sun, Nov 9, 2025 at 10:22 PM wrote: > |>> Quoth Dan Cross : > |>>> Post mortem analysis is undeniably useful. But I maintain that it is > |>>> _mostly_ orthogonal to `assert`. > |>> > |>> What are you doing with the printed values of assert (or the > |>> stack trace), other than post mortem analysis? > |> > |> That's reductive. Surely there is a qualitative difference between > |> reading an error message and invoking a debugger, no? And as I said, > |> there are instances where you `assert` and no core file (or broken > |> process) to debug is produced. > |> > |> - Dan C. > |> > |> (And of course I must acknowledge that I did misread your earlier > |> statement about stack traces being at times insufficient.) > | > |What I would like is to see on assert() failure is for the system > |to invoke a debugger, provided matching source can be found. But > |this requires compilers/linkers to *not* throw away information[1]. > | > |If a decent protocol is defined and appropriate access permissions > |are obtained, in theory a failure at a customer site can invoke > |the debugger at the developer site[2]. Then instead of an autopsy > |one can do a biopsy and may be even temporarily "cure" the patient! > | > |This can be useful when a system (or test) fails after many hours. > | > |[1] Would be nice to see C/C++/etc. compiled language tools to > |catch up to Lisp systems of the last century! > | > |[2] Dealing with leakage of customer/personal info is a separate > |issue but must be dealt with in any remote debugging protocol. > > Fwiw i totally disagree with any opinion who says that asserts > shold remain in shipout code. For me there always has been debug- > enabled developer-, and shipout code. > The former goes many roads the latter will never see, for > example format codec validates format string (not arguments > though), getopt parser does this, and ensures long matches short > equivalent etc, memory cache validates pointers before access, > and all that. Except for the latter this is all developer only, > but the latter should also not mean a thing in shipouts. > For most of all that i even use preprocessor switches to avoid > compilation overhead for users. > > What has not yet been mentioned at all is the runtime behavior > difference in between debug and such optimized builds. > This is a real problem. Especially so in true (let alone > heavy) multithreading environments. In sofar i think the Salz' > mentioned OSSL approach of having some kind of "verify" panicking > or returning error is possibly best, but, i have not looked, > even the different code layout (likely) resulting from that, > ie, function call preparations, relative jump differences, > different sizes of .RODATA etc, you know, could play a role. > To me assertions are developer-only basic preconditions, which > should never ever trigger in mature code. If there is only > a slight change they could trigger, then regular error conditions > are due. 
> > In fact i started to diversify my code a bit further after > having seen that package maintainers sometimes enable debug code, > resulting in development code paths to be included. (ASSERT is > still based upon -DNDEBUG though). One maintainer (i am thankful > for everyone who goes down that road!) of a distribution which > only provides binaries now even explicitly uses git checkouts > that include development cruft, even though the normal releases > are based upon stripped such, for faster compilation, manual > display, etc. > > That is to say that one should carefully take into account what > could be done onto the software "downstream". > For me all that will surely move further behind some "devel"opment > curtain, not only "debug", or even only -DNDEBUG. I hate bugs, > i hate all that, i do not want normal users to have a need to face > such development mess. No. > > I mean, it is easy for OSSL, with their perl build environment, > and they have the standing to simply say "that is unsupported". > This will not work except with good will for most other projects. > > P.S.: i hate debuggers. In case of crash there are thread > specific call graphs manages in software. Takes time, but gives > a path over hundreds or more function calls. > You say Potaetoe, and i say Potato. Maybe. > > --End of > > --steffen > | > |Der Kragenbaer, The moon bear, > |der holt sich munter he cheerfully and one by one > |einen nach dem anderen runter wa.ks himself off > |(By Robert Gernhardt) "Nature doesn't care how smart you are. You can still be wrong." - Richard Feynman David Barto barto at kdbarto.org
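Tying Dan's earlier point about ASSERT3x-style macros to David's remarks about failure-time reporting, here is a small sketch of an assertion that preserves some surrounding information in the message itself, so something useful survives even when no core file or debugger is available. The macro is illustrative only, not the ASSERT3U of any particular codebase.

    /*
     * Sketch of a context-preserving assertion in the spirit of the
     * ASSERT3x macros mentioned earlier in the thread: on failure it
     * prints both operand values before aborting.  Illustrative only.
     */
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define ASSERT3U(left, op, right)                                        \
        do {                                                                 \
            unsigned long long l_ = (left), r_ = (right);                    \
            if (!(l_ op r_)) {                                               \
                fprintf(stderr, "%s:%d: %s %s %s failed (%llu vs %llu)\n",   \
                        __FILE__, __LINE__, #left, #op, #right, l_, r_);     \
                signal(SIGABRT, SIG_DFL); /* keep a handler from eating it */\
                abort();                                                     \
            }                                                                \
        } while (0)

    int main(void)
    {
        unsigned long long used = 10, capacity = 8;   /* deliberately wrong */
        ASSERT3U(used, <=, capacity);   /* prints both values, then aborts */
        return 0;
    }

David's "x-ray" approach goes much further, of course, and with a discipline this sketch skips: fprintf here is a liberty that his carefully crafted, allocation-free failure-time code could not take.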