Discussion:
[dev] Checksums and Sig files for release gzip
Sagar Acharya
2021-04-13 11:34:45 UTC
Permalink
Can we have SHA512 checksums and sig files for the release gzips of suckless software?

Thanking you
Sagar Acharya
https://designman.org
Daniel Cegiełka
2021-04-13 14:45:07 UTC
Permalink
How/where is SHA512 better than SHA256 or SHA1? I don't see any added
value in this. If someone breaks into your server and replaces files,
they may also regenerate the checksums (SHA256/512, SHA3, scrypt etc.).
Using MD5 would be just as (un)safe as SHA512 :)

A better solution is e.g. signify from OpenBSD or GnuPG.

https://man.openbsd.org/signify

Daniel
Post by Sagar Acharya
Can we have SHA512 checksums and sig files for the release gzips of suckless software?
Thanking you
Sagar Acharya
https://designman.org
Sagar Acharya
2021-04-13 14:57:39 UTC
Permalink
Sure, any good signature. SHA512 is stronger than SHA1, MD5 and SHA256. It shouldn't take a second more than others. Why use a weaker checksum?
Thanking you
Sagar Acharya
https://designman.org
Post by Daniel Cegiełka
How/where is SHA512 better than SHA256 or SHA1? I don't see any added
value in this. If someone breaks into your server and replaces files,
they may also regenerate the checksums (SHA256/512, SHA3, scrypt etc.).
Using MD5 would be just as (un)safe as SHA512 :)
A better solution is e.g. signify from OpenBSD or GnuPG.
https://man.openbsd.org/signify
Daniel
Post by Sagar Acharya
Can we have SHA512 checksums and sig files for the release gzips of suckless software?
Thanking you
Sagar Acharya
https://designman.org
Mattias Andrée
2021-04-13 15:08:31 UTC
Permalink
On Tue, 13 Apr 2021 16:57:39 +0200
Post by Sagar Acharya
Sure, any good signature. SHA512 is stronger than SHA1, MD5 and SHA256. It shouldn't take a second more than others. Why use a weaker checksum?
SHA512 is actually more than twice as fast as SHA256 on 64-bit machines.
(I don't know which is stronger).

I see no point in having checksums at all, except for detecting bitrot.
Signatures are of course good.
Post by Sagar Acharya
Thanking you
Sagar Acharya
https://designman.org
Post by Daniel Cegiełka
How/where is SHA512 better than SHA256 or SHA1? I don't see any added
value in this. If someone breaks into your server and replaces files,
they may also regenerate the checksums (SHA256/512, SHA3, scrypt etc.).
Using MD5 would be just as (un)safe as SHA512 :)
A better solution is e.g. signify from OpenBSD or GnuPG.
https://man.openbsd.org/signify
Daniel
Post by Sagar Acharya
Can we have SHA512 checksums and sig files for the release gzips of suckless software?
Thanking you
Sagar Acharya
https://designman.org
Markus Wichmann
2021-04-13 18:17:37 UTC
Permalink
Post by Mattias Andrée
On Tue, 13 Apr 2021 16:57:39 +0200
Post by Sagar Acharya
Sure, any good signature. SHA512 is stronger than SHA1, MD5 and SHA256. It shouldn't take a second more than others. Why use a weaker checksum?
SHA512 is actually more than twice as fast as SHA256 on 64-bit machines.
(I don't know which is stronger).
Y'know, while we're bikeshedding, why not just use SHA-3? Keccak has
been out for a while now, and it is also available in 256 and 512 bit
variants. I keep wondering why people keep using SHA-2 variants. Do you
want to wait until it is cracked?

SHA-3 would have the benefit of always being a 64-bit algorithm (unlike
SHA-2, which is 32-bit in the 224 and 256 bit variants and 64-bit in
the 384 and 512 bit variants, necessitating two very similar processing
functions in C). Its design also makes HMAC easier, though this is not
of import for this application.
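To illustrate that duplication, a rough sketch (the function names are just
illustrative; the rotation counts are the Sigma-0 constants from FIPS 180-4,
and a real compression function repeats this pattern for every such helper):

#include <stdint.h>

/* SHA-224/256 operate on 32-bit words */
uint32_t rotr32(uint32_t x, int n) { return (x >> n) | (x << (32 - n)); }
uint32_t sigma0_256(uint32_t x) { return rotr32(x, 2) ^ rotr32(x, 13) ^ rotr32(x, 22); }

/* SHA-384/512 have the same structure on 64-bit words with different
 * rotation counts, so a C implementation ends up carrying two
 * near-identical sets of helpers and compression loops */
uint64_t rotr64(uint64_t x, int n) { return (x >> n) | (x << (64 - n)); }
uint64_t sigma0_512(uint64_t x) { return rotr64(x, 28) ^ rotr64(x, 34) ^ rotr64(x, 39); }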
Post by Mattias Andrée
I see no point in having checksums at all, except for detecting bitrot.
Signatures are of course good.
Signatures only help if you have a known-good public key. Anyone can
create a key and claim it belongs to, say, Barack Obama. I have no
public key of anyone affiliated with suckless, and no way to verify if
any key I get off of a keyserver is actually one of theirs.

Security is hard.

Ciao,
Markus
Mattias Andrée
2021-04-13 18:48:15 UTC
Permalink
On Tue, 13 Apr 2021 20:17:37 +0200
Post by Markus Wichmann
Post by Mattias Andrée
On Tue, 13 Apr 2021 16:57:39 +0200
Post by Sagar Acharya
Sure, any good signature. SHA512 is stronger than SHA1, MD5 and SHA256. It shouldn't take a second more than others. Why use a weaker checksum?
SHA512 is actually more than twice as fast as SHA256 on 64-bit machines.
(I don't know which is stronger).
Y'know, while we're bikeshedding, why not just use SHA-3? Keccak has
been out for a while now, and it is also available in 256 and 512 bit
variants. I keep wondering why people keep using SHA-2 variants. Do you
want to wait until it is cracked?
I use SHA-3 :) But interesting, even though Keccak (from which SHA-3 is
derived) won over BLAKE2, BLAKE2 seems to be more popular.
Post by Markus Wichmann
SHA-3 would have the benefit of always being a 64-bit algorithm (unlike
SHA-2, which is 32-bit in the 192 and 256 bit variants, and 64-bit in
the 384 and 512 bit variants, necessitating two very similar processing
functions in C).
SHA-3 may be 64-bit, but it is just a set of four special configurations of
Keccak, which itself has no such restriction at all, and that complicates the
algorithm. Just as you would choose SHA-3 (and one specific version of it)
rather than Keccak, you would only choose one specific version of SHA-2, so
if you only implement that version you can get rid of these complexities.
However, in the real world applications implement all, or at least four, of
the SHA-2 versions, which only requires two distinct, simple implementations.
With SHA-3 you can get rid of some complexity by restricting the
implementation to SHA-3, but wouldn't you rather implement it via Keccak, so
that you can easily support all variants of Keccak? (When I implemented
sha3sum, SHA-3 was not defined yet, we only had Keccak, so I had to
implement it with all those complexities; I just left it that way when SHA-3
was finalised, so it could support more hashing algorithms.)
Post by Markus Wichmann
Its design also makes HMAC easier, though this is not
of import for this application.
Post by Mattias Andrée
I see no point in having checksums at all, except for detecting bitrot.
Signatures are of course good.
Signatures only help if you have a known-good public key. Anyone can
create a key and claim it belongs to, say, Barack Obama. I have no
public key of anyone affiliated with suckless, and no way to verify if
any key I get off of a keyserver is actually one of theirs.
That's where the idea of web of trust comes in. During slcon we can
have key signing parties. Then other people can sign our keys, and
eventually there is a chain from someone you trust to the suckless
developers. Additionally, the developers can host their signed keys
on other websites, including their own. Then, if you get them off multiple
servers, including well-known ones, they are fairly trustworthy.
Post by Markus Wichmann
Security is hard.
Ciao,
Markus
Sergey Matveev
2021-04-13 19:21:50 UTC
Permalink
Post by Mattias Andrée
But interesting, even though Keccak (from which SHA-3 is
derived) won over BLAKE2, BLAKE2 seems to be more popular.
Keccak won over "BLAKE". "BLAKE2" is a reduced-round, tweaked version of "BLAKE".
BLAKE2 is very fast, has a very high security margin and the ability to
be used as a MAC and to add randomization/personalization -- that is why it is
popular.
--
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263 6422 AE1A 8109 E498 57EF
Daniel Cegiełka
2021-04-13 19:43:51 UTC
Permalink
Post by Sergey Matveev
Post by Mattias Andrée
But interesting, even though Keccak (from which SHA-3 is
derived) won over BLAKE2, BLAKE2 seems to be more popular.
Keccak won over "BLAKE". "BLAKE2" is a reduced-round, tweaked version of "BLAKE".
BLAKE2 is very fast, has a very high security margin and the ability to
be used as a MAC and to add randomization/personalization -- that is why it is
popular.
BLAKE2 is a weakened version of BLAKE. The goal was different: file
checksums. I was using BLAKE2 in my mtree port, but with the output
truncated to 24 bytes (that can be done with BLAKE2).
Post by Sergey Matveev
--
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263 6422 AE1A 8109 E498 57EF
Sergey Matveev
2021-04-13 18:58:56 UTC
Permalink
Post by Markus Wichmann
Y'know, while we're bikeshedding, why not just use SHA-3?
Answer is: https://www.imperialviolet.org/2017/05/31/skipsha3.html
and the answer to that: https://cryptologie.net/article/400/maybe-you-shouldnt-skip-sha-3/
SHA3 is good, but "offers no compelling advantage over SHA2 and brings
many costs". SHA2 is not so bad. Personally I tend to use neither SHA2
nor SHA3, but BLAKE2b (on 64-bit CPUs it is even faster than MD5, with a
huge security margin), or Skein. KangarooTwelve (reduced-round,
parallelized SHA3) will outperform all of them, but BLAKE3 beats it. And
SHA512 is preferable to SHA256, mostly because it is faster on 64-bit CPUs.
--
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263 6422 AE1A 8109 E498 57EF
Markus Wichmann
2021-04-14 04:03:42 UTC
Permalink
Post by Sergey Matveev
Post by Markus Wichmann
Y'know, while we're bikeshedding, why not just use SHA-3?
Answer is: https://www.imperialviolet.org/2017/05/31/skipsha3.html
I don't care about the speed of a hash function. Speed of a hash
function matters only in two cases: Doing lots of hashing (e.g. password
cracking or bitcoin mining), or hashing large files. I don't hash large
files often enough for speed to matter, I think bitcoin mining is
pollution, and in case of password cracking, having a slower hash
function is an advantage for me, as I would be on the side of the
defenders.
Post by Sergey Matveev
and answer for that: https://cryptologie.net/article/400/maybe-you-shouldnt-skip-sha-3/
SHA3 is good, but "offers no compelling advantage over SHA2 and brings
many costs". SHA2 is not so bad.
I am not a cryptographer. From what I understand about SHA-3, it offers
a better HMAC function (the whole padding thing is not needed anymore,
since hash extension attacks are not possible).

I am dependent on the advice of cryptographers for the selection of
hashing algorithms. Cryptographers had a big old competition over the
"best" hashing algorithm (and I realize that multidimensional
optimization is, in general, impossible), and in 2012, Keccak (in a
64-bit variant) won. Now of course, since then, nine years have passed,
and newer developments have not seen such a competition. But I lack the
skills to evaluate any of the other possibilities for anything except
speed, which is the one thing I don't care about. So until SHA-4 comes
along, or another comparable competition, I will stick to SHA-3.

And I will continue to advocate for its use exclusively over SHA-2 to
keep the zoo of hash functions small. SHA-3 should be used for its HMAC
property alone, and it is adequate for all other tasks, so there is also
no reason to keep SHA-2 around.

Ciao,
Markus
Sergey Matveev
2021-04-14 06:05:01 UTC
Permalink
Post by Markus Wichmann
I don't care about the speed of a hash function.
If we are talking here about checking software integrity, then speed is
important. Millions of people check the hash of downloaded files -- if
it is slow, then a huge quantity of time/energy is wasted. The less time you
spend on hashing, the less energy is wasted. SHA2 (and SHA3 too, if we are
not talking about the KangarooTwelve modifications) is the worst choice from
an ecological point of view.
Post by Markus Wichmann
I think bitcoin mining is pollution
Agreed. But I see nothing in common between proof-of-work schemes and hash
functions. PoW schemes ("good" ones, like Argon2, which can be used for that
task) use a special construction -- it does not matter whether the underlying
hash is fast or slow, because we can simply do more iterations with it.
Post by Markus Wichmann
and in case of password cracking, having a slower hash
function is an advantage for me
That can only mean that you still use an ancient PBKDF2-like scheme of
password strengthening. A long time ago the Password Hashing Competition
brought us memory-hard hashing functions like Argon2 (the winner) and
Balloon (which appeared after the PHC, but is my favourite). Hash-function
speed does not play any considerable role there, because memory is actively
used and *is* the bottleneck for a brute-force operation. Anyway, a slower
hash for PBKDF2 just means that the number of iterations will be smaller --
a faster hash means more iterations. So only dead-simple password-hashing
constructions like hash(password) benefit from a slower hash, and those are
just silly and unacceptable to use at all if you worry about
brute-force cracking.
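To make the iteration argument concrete, a simplified sketch (this is NOT
real PBKDF2, which also mixes in a salt through HMAC; H() is a placeholder
prototype for any hash, not a real library call). The defender tunes `iters`
to a fixed time budget, so a hash that is twice as fast simply gets twice as
many iterations and the attacker gains nothing:

#include <stdint.h>
#include <string.h>

#define OUT 32
void H(const uint8_t *msg, size_t len, uint8_t out[OUT]);  /* placeholder hash */

/* naive iterated stretching: total cost = iters * (cost of one H) */
void stretch(const uint8_t *pw, size_t pwlen, unsigned long iters, uint8_t out[OUT])
{
        uint8_t a[OUT], b[OUT];
        unsigned long i;

        H(pw, pwlen, a);
        for (i = 1; i < iters; i++) {
                H(a, OUT, b);
                memcpy(a, b, OUT);
        }
        memcpy(out, a, OUT);
}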
Post by Markus Wichmann
I am not a cryptographer. From what I understand about SHA-3, it offers
a better HMAC function
1) Do not confuse "MAC" and "HMAC". HMAC is a special construction,
HMAC(K, m) = H((K XOR opad) || H((K XOR ipad) || m)), that makes a MAC out
of a hash function (a C sketch of it follows after point 3). It is required
at least because many hash functions are built as Merkle–Damgård
constructions, which have properties that prevent the simple H(K || m) usage.

2) SHA3 is not Merkle–Damgård and can safely be used as a MAC with just
an H(K || m) calculation. HMAC can be used with SHA3 without any problems,
but it just calls the hash function one more time. For big messages that
does not play any noticeable role at all (hashing a terabyte plus one
more hash of a few dozen bytes), but for small ones a "native" SHA3-MAC
will simply be faster.

3) "Native" SHA3-MAC is not better. It is just the same, from security
point of view. Nothing wrong with HMAC, nothing wrong with SHA3-MAC.
Latter will be just faster especially for small messages.
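Here is the promised sketch of the HMAC construction in C. H() is a
placeholder prototype for any hash with BLOCK-byte blocks and an OUT-byte
digest (SHA-256 would have BLOCK=64, OUT=32); it is not a real library call,
and the key is assumed to be already padded/reduced to BLOCK bytes:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK 64
#define OUT   32
void H(const uint8_t *msg, size_t len, uint8_t out[OUT]);  /* placeholder hash */

/* HMAC(K, m) = H((K XOR opad) || H((K XOR ipad) || m)) */
int hmac(const uint8_t key[BLOCK], const uint8_t *m, size_t mlen, uint8_t out[OUT])
{
        uint8_t pad[BLOCK], inner[OUT];
        uint8_t *buf;
        size_t i;

        buf = malloc(BLOCK + (mlen > OUT ? mlen : OUT));
        if (buf == NULL)
                return -1;

        for (i = 0; i < BLOCK; i++)            /* inner key: K XOR ipad (0x36) */
                pad[i] = key[i] ^ 0x36;
        memcpy(buf, pad, BLOCK);
        memcpy(buf + BLOCK, m, mlen);
        H(buf, BLOCK + mlen, inner);           /* inner hash over the message */

        for (i = 0; i < BLOCK; i++)            /* outer key: K XOR opad (0x5c) */
                pad[i] = key[i] ^ 0x5c;
        memcpy(buf, pad, BLOCK);
        memcpy(buf + BLOCK, inner, OUT);
        H(buf, BLOCK + OUT, out);              /* outer hash over the inner digest */

        free(buf);
        return 0;
}

With SHA3 you could instead just compute H(K || m) directly, which is the
point of 2) above.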
Post by Markus Wichmann
(the whole padding thing is not needed anymore,
since hash extension attacks are not possible).
Yes, HMAC prevents them. SHA3 is simply immune to them out of the box,
because of its sponge construction. Nothing is wrong with either of them;
neither is better.
Post by Markus Wichmann
Cryptographers had a big old competition over the
"best" hashing algorithm and in 2012, Keccak won.
There are too many questions about what "best" means. Keccak won mainly
because it is not a Merkle-Damgård construction, which *probably* will
someday be found to be problematic with more issues. *Possibly* that
construction itself was a mistake. SHA3 is a ready replacement *if*
something turns out to be wrong with SHA2. Official statements require
replacing MD5 and SHA1 with SHA2 or later. But there are no statements or
recommendations to replace SHA2 with SHA3, because nothing is wrong with SHA2.
Post by Markus Wichmann
And I will continue to advocate for its use exclusively over SHA-2 to
keep the zoo of hash functions small. SHA-3 should be used for its HMAC
property alone, and it is adequate for all other tasks, so there is also
no reason to keep SHA-2 around.
There is nothing wrong with using SHA3. But it is wrong to say that SHA2 is
somehow bad and must be replaced. That is why even newer designs, knowing
that SHA3 has been here for a long time already, still choose SHA2, exactly
to keep the zoo of hash functions small, because no one is going to
replace already existing SHA2-driven software with SHA3. Why is Git not
moving from SHA1 to SHA3? Because there is no point in that; nothing is
wrong with SHA2, which exists in every library.

But anyway there is no possibility of staying with only SHA2 or only
SHA3, because all of them are slow. I use a hash alone for verifying data
integrity and I need speed. That is why BLAKE2/Skein are so
popular and BLAKE3 is gaining popularity too. And as I remember, Skein
even has a higher security margin than Keccak. BLAKE2 has a completely
acceptable margin for all cryptographic tasks too. Possibly KangarooTwelve
will gain popularity too. But the zoo will always be with us.

And do not overestimate the importance of having a standalone MAC function.
Currently all protocols are moving to AEAD-based ciphersuites (TLS 1.3
and Noise work only with AEAD ones), where a MAC is not used alone anymore.
The CCM and EAX AEAD modes use only the cipher function, GCM uses GHASH
(not a hash, not a cipher), and ChaCha20-Poly1305 uses Poly1305, which is a
special one-time MAC function, but none of them use anything related to
hash functions or HMACs. Of course there are hundreds of places where
standalone MAC usage exists, but not like a dozen years ago, when it was
applied to every IPsec/TLS/whatever packet to authenticate it.

And if your protocol assumes that various hashes can be used with it
(Merkle-Damgård-based or anything else), then you are forced to use
and remember HMAC. You can mandate using SHA3 directly, if SHA3 is
used as the hash, but that is a complication. That is why modern TLS 1.3 and
Noise still use HMAC, even if SHA3 is used with them. There was a
discussion about that on the Noise mailing list, because BLAKE2 offers the
same ability to be used as a MAC directly, and Skein too (all of them are
widely used), but the decision was to keep HMAC anyway, for protocol
simplification. So it is hard to see where a native SHA3-MAC could be used
in practice in protocols without hard-coded algorithms. Nothing is wrong
with HMAC-SHA3, except for one more small-message hash at the end,
which is negligible, and it won't be used for transport traffic anyway
because of AEAD ciphers.
--
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263 6422 AE1A 8109 E498 57EF
Daniel Cegiełka
2021-04-14 06:31:48 UTC
Permalink
Sergey - nice summary. Let me just add that there are more uses and
aspects that should be taken into account.

Passwords:
- cpu time vs memory usage vs parallel computation - it is difficult
to address everything with one function, but yescrypt:
https://www.openwall.com/yescrypt/
- side-channel attacks - a strong point of Argon2i, and a weak point of scrypt
or bcrypt. It is a problem if another application on your phone can
catch your password

Integrity:
- network communication - speed is important here (plus for BLAKE2 or BLAKE3)
- IDS (e.g. LKRG) - again, speed is very important here, but the feature
also needs to provide some level of security. Here, that is a plus for
SipHash:

https://github.com/openwall/lkrg/blob/main/src/modules/hashing/p_lkrg_fast_hash.c

So it's hard to find a single hash function that addresses all these
areas. And that's what they are trying to do with SHA3.

Daniel
Mattias Andrée
2021-04-17 15:42:50 UTC
Permalink
On Sat, 17 Apr 2021 16:30:15 +0200
On Wed, 14 Apr 2021 09:05:01 +0300
Dear Sergey,
Post by Sergey Matveev
If we are talking here about checking software integrity, then speed is
important. Millions of people check the hash of downloaded files -- if
it is slow, then a huge quantity of time/energy is wasted. The less time you
spend on hashing, the less energy is wasted. SHA2 (and SHA3 too, if we are
not talking about the KangarooTwelve modifications) is the worst choice from
an ecological point of view.
we would save much more energy by banning autohell, Rust, bloated
electron-apps and Qt.
I've completely ignored Rust. What's the problem with it?
Especially autohell is really a huge waste of
time and energy, and I often find that packages take longer to
"configure" (what for?) than to actually compile. Never has configure
ever helped me; it always stood in the way, e.g. when GHC added a
warning a few months ago, breaking all autoconf checks that assumed
any output from the compiler was an error.
With best regards
Laslo
Sergey Matveev
2021-04-17 16:42:20 UTC
Permalink
in regard to my argument: It has abysmal compile times and the compiler
is extremely bloated.
Also it has a bootstrap problem: officially there is no way to build Rust
except by downloading some binaries for your platform from the Internet.
LLVM/Clang, GCC -- all of them can be compiled with an older GCC, tcc
or whatever C compiler: GNU Guix with GNU Mes bootstraps the C/C++ ecosystem
that way. But the Rust developers... do not bother -- just shut up and
download our binaries.

There is the mrustc project: a Rust compiler written in C++ that can be used
to build Rust itself. But it is just a side project, not official. 16GB of RAM
was not enough at all (constant swapping), and I borrowed a 32-core, 2-CPU
Xeon system with 128GB of RAM just to try to build Rust via mrustc through
several versions (mrustc can build Rust 1.29, which can build 1.30, which
can build 1.31, and so on). I succeeded on Devuan, using more than
50GB of disk space. I could not build it on FreeBSD. So even if
I personally wanted to try Rust, I just have neither hardware powerful enough
for its bootstrapping nor the knowledge of how to build mrustc on FreeBSD.
--
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263 6422 AE1A 8109 E498 57EF
Mattias Andrée
2021-04-17 16:57:09 UTC
Permalink
This self-hosted nonsense is ludicrous. It's understandable for C compilers:
it's an old language that everyone has a compiler for and there are many
implementations, and even if you wrote one in assembly, you would just shift
the problem to the assembler. So there must be one blessed language, and
C and C++ are good options. But can't you just download an older version
of the compiler that's presumably written in C or C++, and compile the
newest version with that version? Or have they not published one, or would
you need to do it in multiple steps due to frequent language changes?

Is a compiler really open source if it cannot be compiled with another
compiler?


Sergey Matveev
2021-04-17 17:50:50 UTC
Permalink
Post by Mattias Andrée
This self-hosted nonsense is ludicrous.
I don't agree.
Post by Mattias Andrée
It's understandable for C compilers
Rust, as far as I heard/remember, was originally written in OCaml, which itself
is written in C -- so nothing would prevent bootstrapping it too, if only
its authors had thought about that. Shame on them.
--
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263 6422 AE1A 8109 E498 57EF
Mattias Andrée
2021-04-17 18:08:06 UTC
Permalink
On Sat, 17 Apr 2021 20:50:50 +0300
Post by Sergey Matveev
Post by Mattias Andrée
This self-hosted nonsense is ludicrous.
I don't agree.
Post by Mattias Andrée
It's understandable for C compilers
Rust, as far as I heard/remember, was originally written in OCaml, which itself
is written in C -- so nothing would prevent bootstrapping it too, if only
its authors had thought about that. Shame on them.
No one has an OCaml compiler. If I'm going to write a compiler, I'm going to
write it in C, even if the language is C, because everyone has a C compiler.
I'm not going to use an older language just because it is older; I'm going
to use the most common language. One day that could be Rust, and then
it would be OK to write a Rust compiler in Rust.
Sergey Matveev
2021-04-17 18:30:58 UTC
Permalink
Post by Mattias Andrée
No one has an OCaml compiler.
Same applies to Rust.
And to Go too, but Go is easily bootstrappable with a C compiler, taking
just several minutes on modest hardware. Rust is like JavaScript: just
download it and run, because that seems so convenient these days.
Post by Mattias Andrée
If I'm going to write a compiler, I'm going to write it in C
That is good. And nearly everyone does so, or at least uses something
that can be built with a C compiler.
--
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263 6422 AE1A 8109 E498 57EF
Mattias Andrée
2021-04-17 18:38:51 UTC
Permalink
On Sat, 17 Apr 2021 21:30:58 +0300
Post by Sergey Matveev
Post by Mattias Andrée
No one has an OCaml compiler.
Same applies to Rust.
And to Go too, but Go is easily bootstrappable with a C compiler, taking
just several minutes on modest hardware. Rust is like JavaScript: just
download it and run, because that seems so convenient these days.
Post by Mattias Andrée
If I'm going to write a compiler, I'm going to write it in C
That is good. And nearly everyone does so, or at least uses something
that can be built with a C compiler.
Yes, one extra step is acceptable, as long as you have a way,
one that isn't too long, to get there from some common starting
point. Self-hosted is a problem, but if you host a
non-self-hosted version that can be used to compile the
self-hosted one, that is also acceptable; you should not
have to manually look through old releases to find a
non-self-hosted version.
Mattias Andrée
2021-04-17 18:41:14 UTC
Permalink
On Sat, 17 Apr 2021 20:38:51 +0200
Post by Mattias Andrée
On Sat, 17 Apr 2021 21:30:58 +0300
Post by Sergey Matveev
Post by Mattias Andrée
No one has an OCaml compiler.
Same applies to Rust.
And to Go too, but Go is easily bootstrappable with a C compiler, taking
just several minutes on modest hardware. Rust is like JavaScript: just
download it and run, because that seems so convenient these days.
Post by Mattias Andrée
If I'm going to write a compiler, I'm going to write it in C
That is good. And nearly everyone does so, or at least uses something
that can be built with a C compiler.
Yes, one extra step is acceptable, as long as you have a way,
one that isn't too long, to get there from some common starting
point. Self-hosted is a problem, but if you host a
non-self-hosted version that can be used to compile the
self-hosted one, that is also acceptable; you should not
have to manually look through old releases to find a
non-self-hosted version.
So basically, if you are going to make a self-hosted compiler
for your own language, you have to publish the last version
that wasn't self-hosted alongside the self-hosted version,
and then make sure all new versions are compilable with that
version.
Sergey Matveev
2021-04-17 19:14:57 UTC
Permalink
Post by Mattias Andrée
you have to publish the last version
that wasn't self-hosted alongside the self-hosted version,
Exactly! And my critique of Rust is that they have not bothered to do it
that way; to me that is just unacceptably careless work. Go, as
a comparison: Go 1.4 is written in C, and all future Go versions can
use it to compile themselves.
--
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263 6422 AE1A 8109 E498 57EF
Greg Reagle
2021-04-19 20:18:47 UTC
Permalink
Anyway, I can't say it enough: Check out Ada 2012 (and the SPARK
subset) if you care about "secure" languages. It's not as lean as C, but
you end up solving so many problems with it, especially in regard to
software engineering and safety.
Okay, I did. Very interesting. I briefly studied Ada many years ago. Do you think that Ada is a viable alternative to Rust? Do you think it is a decent alternative to C for things like operating systems or utilities like sbase or ubase?

I made a Hello World program in Ada. Very fast and small. However, it depends on libgnat-8.so.1. Is there a way to build it so that it does not? Like statically linked?
Alessandro Pistocchi
2021-04-20 07:52:37 UTC
Permalink
Hi,

Sorry to step in but I find this conversation very interesting :-)

I don’t know much about ADA and would like to know a bit more, especially
now that I see it could be a viable alternative to rust.

I am not too keen on rust; on the other hand, I like the idea of doing system
programming with memory safety across multiple cores.

Is there any comparison of ADA and rust that I can read about?

Thanks,
A
Post by Greg Reagle
Anyway, I can't say it enough: Check out Ada 2012 (and the SPARK
subset) if you care about "secure" languages. It's not as lean as C, but
you end up solving so many problems with it, especially in regard to
software engineering and safety.
Okay, I did. Very interesting. I briefly studied Ada many years ago. Do you think that Ada is a viable alternative to Rust? Do you think it is a decent alternative to C for things like operating systems or utilities like sbase or ubase?
I made a Hello World program in Ada. Very fast and small. However, it depends on libgnat-8.so.1. Is there a way to build it so that it does not? Like statically linked?
Greg Reagle
2021-04-19 20:19:18 UTC
Permalink
Anyway, I can't say it enough: Check out Ada 2012 (and the SPARK
subset) if you care about "secure" languages. It's not as lean as C, but
you end up solving so many problems with it, especially in regard to
software engineering and safety.
Okay, I did. Very interesting. I briefly studied Ada many years ago. Do you think that Ada is a viable alternative to Rust? Do you think it is a decent alternative to C for things like operating systems or utilities like sbase or ubase?

I made a Hello World program in Ada. Very fast and small. However, it depends on libgnat-8.so.1. Is there a way to build it so that it does not? Like statically linked?
Mattias Andrée
2021-04-19 20:36:42 UTC
Permalink
On Mon, 19 Apr 2021 16:19:18 -0400
Post by Greg Reagle
Anyway, I can't say it enough: Check out Ada 2012 (and the SPARK
subset) if you care about "secure" languages. It's not as lean as C, but
you end up solving so many problems with it, especially in regard to
software engineering and safety.
Okay, I did. Very interesting. I briefly studied Ada many years ago. Do you think that Ada is a viable alternative to Rust? Do you think it is a decent alternative to C for things like operating systems or utilities like sbase or ubase?
I made a Hello World program in Ada. Very fast and small. However, it depends on libgnat-8.so.1. Is there a way to build it so that it does not? Like statically linked?
For me, libgnat is only dynamically linked if I run gnatbind
with -shared, but if you pass -static it should be statically linked.

I cannot find how to statically link the C runtime.
Greg Reagle
2021-04-19 20:54:23 UTC
Permalink
Post by Mattias Andrée
For me, libgnat is only dynamically linked if I run gnatbind
with -shared, but if you pass -static it should be statically linked.
Thank you. [[ gnatmake hello.adb -bargs -static ]] does the trick, i.e. it
makes the executable larger (of course) by statically linking libgnat. I am
still an Ada beginner so I am not running the linker and binder etc. separately.
Post by Mattias Andrée
I cannot find how to statically link the C runtime.
Yea, it still is dynamically linked to (depends on) several libraries:
$ ldd hello
linux-vdso.so.1 (0x00007ffe57fde000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007ffbb8a93000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007ffbb8a79000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffbb88b8000)
/lib64/ld-linux-x86-64.so.2 (0x00007ffbb8af5000)
Greg Reagle
2021-04-20 09:40:19 UTC
Permalink
On my machine (using musl), using `-largs -static` is sufficient to get a fully static executable.
Thank you. (My machine has glibc). I can do it now.

$ gnatmake hello -largs -static -bargs -static
$ ldd hello
not a dynamic executable

Of course it is big now: 1.2M. I assume if I had musl it would be smaller.
Alex Pilon
2021-04-21 01:50:38 UTC
Permalink
Post by Greg Reagle
$ gnatmake hello -largs -static -bargs -static
$ ldd hello
not a dynamic executable
Of course it is big now: 1.2M. I assume if I had musl it would be smaller.
Wouldn't you need to turn on LTO (link-time optimisation), for what little
or much it can do? That depends on whether Ada has a runtime and most of the
code overhead is the runtime's and most of that code is actually run, versus
whether it's just some unused stdlib functions or the like.

That's assuming LTO is a good idea by now and isn't buggy like I heard
years ago, if that was even true. I'm out of the loop.
Jeremy
2021-04-20 01:19:32 UTC
Permalink
Post by Greg Reagle
Okay, I did. Very interesting. I briefly studied Ada many years ago. Do you think that Ada is a viable alternative to Rust? Do you think it is a decent alternative to C for things like operating systems or utilities like sbase or ubase?
I made a Hello World program in Ada. Very fast and small. However, it depends on libgnat-8.so.1. Is there a way to build it so that it does not? Like statically linked?
What does Ada(or Rust for that matter) do better than C?

Surely, you have all of the tools for static analysis, debugging, macros
for C that you would for any other language, no?

I could understand generics, interfaces, iterators, OOP and all of that
from a masturbatory standpoint, but that aspect aside, what utility do
these provide over C?

Jeremy
Kyryl Melekhin
2021-04-19 21:40:33 UTC
Permalink
Post by Jeremy
What does Ada(or Rust for that matter) do better than C?
Surely, you have all of the tools for static analysis, debugging, macros
for C that you would for any other language, no?
I could understand generics, interfaces, iterators, OOP and all of that
from a masturbatory standpoint, but that aspect aside, what utility do
these provide over C?
Jeremy
Rationally, there is nothing better than C. I wish all the other things
did not exist, so that people would stop piling crap on top of crap.
It takes a solid engineering discipline, which is long forgotten.

Kyryl.
Greg Reagle
2021-04-20 10:29:30 UTC
Permalink
Thank you for your explanation Laslo Hunhold. I wholeheartedly agree with you
about the fallibility of human programmers, and the vulnerability of C to
errors. Even though I am a fan of the suckless philosophy and its programs,
which are written in C, I wish that a less error-prone language would be used.

Perhaps I will write (or more likely re-write a C program) a very small program
in Ada as a proof-of-concept of the viability of Ada for several purposes:
- for me to learn Ada
- as a proof-of-concept or illustration of the viability of Ada
- to compare and contrast number of lines of source code, memory usage, speed etc.
- if it turns out well, as advocacy for Ada
- if it turns out ill, as a lesson learned, then I'll continue my search for a good alternative to C

I am open to suggestions. I am thinking something from sbase or ubase.

Ada is a big language with a lot of features. I definitely intend to work with
a small subset of those features.
Greg Reagle
2021-05-02 22:17:45 UTC
Permalink
Post by Greg Reagle
Thank you for your explanation Laslo Hunhold. I wholeheartedly agree
with you about the fallibility of human programmers, and the
vulnerability of C to errors. Even though I am a fan of the suckless
philosophy and its programs, which are written in C, I wish that a
less error-prone language would be used.
you summarized that very well. I completely agree.
Do you have any other suggestions for alternatives to C?
Post by Greg Reagle
Perhaps I will write (or more likely re-write a C program) a very
small program in Ada as a proof-of-concept of the viability of Ada
- for me to learn Ada
- as a proof-of-concept or illustration of the viability of Ada
- to compare and contrast number of lines of source code, memory usage, speed etc.
- if it turns out well, as advocacy for Ada
- if it turns out ill, as a lesson learned, then I'll continue my
search for a good alternative to C
I am open to suggestions. I am thinking something from sbase or ubase.
How did your approach turn out?
I haven't started re-writing anything yet. I am reading Ada tutorials and
learning Ada. Thanks for asking. I'll let you know if/when I try it.
Anders Damsgaard
2021-05-05 13:48:16 UTC
Permalink
On Sun, 02 May 2021 18:17:45 -0400
Dear Greg,
Post by Greg Reagle
Do you have any other suggestions for alternatives to C?
this question is too general. For academic purposes (HPC, data
analysis, numerical mathematics, statistics, etc.) I can recommend
Julia.
If you want to do simple tasks on your workstation like solve some ODEs,
Julia is a viable alternative to MATLAB/Python because of its relative
speed and ease of use.

However, I would *never* consider Julia a viable alternative to C/FORTRAN
for tasks including numerical simulations and massively parallel deployment
on HPC systems. I've worked with both Julia and C, and strongly advise
against Julia for those purposes. While promising on paper, the reality
is that the language is immature, which creates issues with code
compatibility between versions. One example is that they flip-flop on
variable scoping in the global space, effectively breaking most scripts
without warning. Also, the julia+python+blas dependencies of installed
packages and the computational overhead quickly become very significant.
Furthermore, the garbage collection is poor and leads to an
orders-of-magnitude increase in memory footprint over days of running an
iterative simulation.
--
Anders Damsgaard
https://adamsgaard.dk
gopher://adamsgaard.dk
Jeremy
2021-04-20 13:45:40 UTC
Permalink
The strong point over Rust is readability, stronger guarantees, built-in
concurrency and the fact that it's ISO-standardized, among many other
things. To see how far you can go with Ada (using SPARK, a very close
subset), read chapter 2.2 in [0]. All the tooling is GPL licensed, but
they also make money with professional-tier packages.
Regarding readability: in terms of just the standard libraries, I
agree that Rust is more readable than C, especially when it comes to
iterating and generics.

The barriers to entry for hacking the compiler, however, increase as
these features are added.

etwyniel has suggested an implementation of generics in C that uses M4:
https://github.com/etwyniel/c-generics

For example:
VEC(int) v;
v = VEC_NEW(int);

printf("capacity: %zu\n", v.cap);
PUSH(v, 55, 95, 88, 1, 2, 3, 4);

SLICE(int) sl = VEC_SLICE(int, &v, :4);
printf("Slice length: %zu\n", sl.size);

Note that M4's utility is not restricted to C preprocessing; it has
applications in many other languages. M4 has a simple syntax and can be
implemented in very few lines of code:

$ git clone https://github.com/eunuchs/heirloom-project
$ wc -l heirloom-project/heirloom/heirloom-devtools/m4/* | grep total
2578 total

Regarding ISO-standardization: could you explain a bit more about the
value of this?

Regarding built-in concurrency: I would argue that pipe(3) & select(3)
are sufficient for built-in concurrency, though I understand this debate
is ongoing.
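A minimal sketch of what I mean, with hypothetical "workers" and error
handling omitted: one child process per pipe, and the parent multiplexes
their output with select(2):

#include <stdio.h>
#include <sys/select.h>
#include <sys/types.h>
#include <unistd.h>

int
main(void)
{
        int p[2][2], open_fds = 2, i;
        char buf[128];
        ssize_t n;
        fd_set rd;

        for (i = 0; i < 2; i++) {
                pipe(p[i]);
                if (fork() == 0) {              /* worker: do some "work", report, exit */
                        close(p[i][0]);
                        dprintf(p[i][1], "worker %d done\n", i);
                        _exit(0);
                }
                close(p[i][1]);                 /* parent keeps only the read ends */
        }

        while (open_fds > 0) {
                FD_ZERO(&rd);
                for (i = 0; i < 2; i++)
                        if (p[i][0] >= 0)
                                FD_SET(p[i][0], &rd);
                select(FD_SETSIZE, &rd, NULL, NULL, NULL);

                for (i = 0; i < 2; i++) {
                        if (p[i][0] < 0 || !FD_ISSET(p[i][0], &rd))
                                continue;
                        n = read(p[i][0], buf, sizeof(buf));
                        if (n > 0) {            /* forward worker output */
                                fwrite(buf, 1, n, stdout);
                        } else {                /* EOF: worker is done */
                                close(p[i][0]);
                                p[i][0] = -1;
                                open_fds--;
                        }
                }
        }
        return 0;
}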
Which stronger guarantees am I talking about? You can do contractual
programming, guarantee there are no runtime-errors statically (!, i.e.
at compile time), prove statically that there are no data-races (even
in concurrent programs, how nice is that?) and Ada has had a proper
memory ownership model since the 80's which Rust is selling us like
this big new invention. And such guarantees are good to have when you
write a program responsible for actuating nuclear-reactor-control-rods
or the avionics of an aeroplane.
I agree that Rust is better at marketing memory ownership. I'd argue
that Rust is better at marketing as a whole.

Have a look at the arguments you can pass to "-fsanitize=" in gcc(1).
Post by Kyryl Melekhin
Rationally, there is nothing better than C. I wish all the other
things did not exist, so that people would stop piling crap on top of
crap. It takes a solid engineering discipline, which is long forgotten.
But is the C-ecosystem really so light? We're using a slew of static
analyzers, debuggers, etc. to fix our C programs, and even though I've
been programming in C for a decade and would call myself relatively
good at it, I still keep on making mistakes.
Isn't this the essence of UNIX?

Rust is an incredibly fun language to write in, and I believe that the
enthusiasm for it is unparalleled; however, I'm not certain it's doing
anything better in terms of debugging & static analysis compared to the
C ecosystem.

Jeremy
Greg Reagle
2021-04-20 14:23:35 UTC
Permalink
Post by Jeremy
Have a look at the arguments you can pass to "-fsanitize=" in gcc(1).
I am glad that you pointed that out to me--thank you. Does clang have
comparable functionality?

I gave up on using dvtm a while ago (now I use tmux which is good) because it
would keep crashing. And I could not figure out how to debug the crashes or get
specific information about the cause of the crashes. If I had known about these
options then I would have compiled dvtm with them and maybe gave better bug
reports. (Though I know C, I am not an expert in C.)

Can someone point me to an article or blog post suggesting which of these
sanitize options would be recommended for general daily use?

Are there any operating systems or (Linux) distributions that use these run-time
checks by default, i.e. their binary packages are compiled with them?

If enabling these run-time checks adds 5%, 10%, or even 25% to the run time or
memory usage of a presumably already fast and small C binary executable, then it
is worth it to me.
Miles Rout
2021-04-20 14:47:16 UTC
Permalink
Post by Greg Reagle
Can someone point me to an article or blog post suggesting which of these
sanitize options would be recommended for general daily use?
Take your favourite Makefile and add

CFLAGS += -fsanitize=address -fsanitize=undefined
LDFLAGS += -lasan -lubsan

You might also need CFLAGS += -fanalyzer.

Then (at least with GCC 10.3.0) it should be as simple as building and
running the program.
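For example, a toy program like this (hypothetical, with two deliberate bugs)
makes both sanitizers report at run time when built with those flags:

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char *argv[])
{
        int *a = malloc(4 * sizeof *a);
        int x = INT_MAX;

        (void)argv;
        if (a == NULL)
                return 1;
        x += argc;              /* signed overflow: reported by -fsanitize=undefined */
        a[4] = x;               /* heap out-of-bounds write: reported by -fsanitize=address */
        printf("%d\n", a[4]);
        free(a);
        return 0;
}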
Jeremy
2021-04-24 02:12:04 UTC
Permalink
Post by Greg Reagle
I gave up on using dvtm a while ago (now I use tmux which is good) because it
would keep crashing. And I could not figure out how to debug the crashes or get
specific information about the cause of the crashes. If I had known about these
options then I would have compiled dvtm with them and maybe gave better bug
reports. (Though I know C, I am not an expert in C.)
I know what you're talking about & it's a pain in the ass. I believe
this is due to the ANSI parser implementation(vt.c) that DVTM uses.

I wrote a library, libst(a fork of st), and modified st, dvtm to link against it:
https://github.com/jeremybobbin/libst

Try compiling & installing libst, then compile & run dvtm in libst/examples.

As much as I love dvtm, I believe it's a captive user interface, and
lacks the extensibility that a terminal multiplexer could/should provide.

Attempting to address this, I wrote, what I believe to be, a suckless
approach to terminal multiplexing - svtm:
https://github.com/jeremybobbin/svtm

svtm is a composition of primarily 4 programs:
- abduco - {at,de}tach
- svt - TTY state/dumping/scrolling
- bmac - byte-for-byte macros
- itty - lets you run TTY input through a filter(such as bmac)

I'd like to add a "paner" program to that list, but for now, the above
is all you need to express any terminal-oriented workflow in a UNIX
environment.

I'm curious as to what y'all think.

Jeremy
Greg Reagle
2021-04-24 08:15:09 UTC
Permalink
Post by Jeremy
https://github.com/jeremybobbin/libst
Try compiling & installing libst, then compile & run dvtm in libst/examples.
Okay, I am trying it. I get [[dvtm.c:39:10: fatal error: /usr/local/include/libst.h: Permission denied]]. Add these chmod lines to your Makefile:
cp -f libst.a $(DESTDIR)$(PREFIX)/lib
chmod 644 $(DESTDIR)$(PREFIX)/lib/libst.a
cp -f libst.h $(DESTDIR)$(PREFIX)/include
chmod 644 $(DESTDIR)$(PREFIX)/include/libst.h

When I compile examples/dvtm, I get a page full of warnings. Can you clean them up?

Would you be willing to provide a way (perhaps a Makefile target) to compile examples/dvtm with the extra checks that gcc and clang are capable of doing? I mean things like -g, -fsanitize=address -fsanitize=undefined, -lasan -lubsan, and so forth. Flags that are useful for debugging.

I tried examples/dvtm for one minute and it works okay, FYI. I have to use it a lot longer than that to reproduce a crash though.
Greg Reagle
2021-04-24 08:46:26 UTC
Permalink
All of your programs/libraries get installed into /usr/local/bin except svtm, which gets installed into $(HOME)/.local/bin. Why is that? If you are going to stay with $HOME, then remove "sudo" from the last step of your installation instructions:
[[git clone https://github.com/jeremybobbin/libst && \
cd libst && make && sudo make install && \
cd examples/svt && make && sudo make install && cd ../../../ && \
git clone https://github.com/jeremybobbin/sthkd && \
cd sthkd && make && sudo make install && cd ../ && \
git clone https://github.com/martanne/abduco && \
cd abduco && ./configure && make && sudo make install && \
git clone https://github.com/jeremybobbin/svtm && \
cd svtm && sudo make install
]]

You have sthkd as both its own git repo and as a subdirectory of libst. I think you ought to choose one or the other.
Jeremy
2021-04-28 19:34:04 UTC
Permalink
Post by Greg Reagle
[[git clone https://github.com/jeremybobbin/libst && \
cd libst && make && sudo make install && \
cd examples/svt && make && sudo make install && cd ../../../ && \
git clone https://github.com/jeremybobbin/sthkd && \
cd sthkd && make && sudo make install && cd ../ && \
git clone https://github.com/martanne/abduco && \
cd abduco && ./configure && make && sudo make install && \
git clone https://github.com/jeremybobbin/svtm && \
cd svtm && sudo make install
]]
Thanks for pointing that out - fixed.
Post by Greg Reagle
You have sthkd as both its own git repo and as a subdirectory of libst. I think you ought to choose one or the other.
You may be confusing sthkd with svt.
svt is just an ANSI state keeper for scrolling, dumping & re-attaching.
sthkd is a play on sxhkd. It just allows you to run commands when you press certain keystrokes.

svt reads commands (like redraw, dump, scroll) from a FIFO.
svtm brings these together - giving key bindings similar to dvtm:
- ^Gc: create a new window
- ^Gj: next window
- ^Gk: previous window
- ^Ge: dump buffer into editor
- ^Gu: scroll up
- ^Gd: scroll down
Ross Mohn
2021-04-26 14:10:20 UTC
Permalink
Post by Jeremy
Post by Greg Reagle
I gave up on using dvtm a while ago (now I use tmux which is good) because it
would keep crashing. And I could not figure out how to debug the crashes or get
specific information about the cause of the crashes. If I had known about these
options then I would have compiled dvtm with them and maybe gave better bug
reports. (Though I know C, I am not an expert in C.)
I know what you're talking about & it's a pain in the ass. I believe
this is due to the ANSI parser implementation(vt.c) that DVTM uses.
https://github.com/jeremybobbin/libst
Try compiling & installing libst, then compile & run dvtm in libst/examples.
As much as I love dvtm, I believe it's a captive user interface, and
lacks the extensibility that a terminal multiplexer could/should provide.
Attempting to address this, I wrote, what I believe to be, a suckless
https://github.com/jeremybobbin/svtm
- abduco - {at,de}tach
- svt - TTY state/dumping/scrolling
- bmac - byte-for-byte macros
- itty - lets you run TTY input through a filter(such as bmac)
I'd like to add a "paner" program to that list, but for now, the above
is all you need to express any terminal-oriented workflow in a UNIX
environment.
I'm curious as to what y'all think.
Jeremy
I and my entire team have been actively and successfully using dvtm for
years. I haven't had it crash in a long while now, and I regularly keep
sessions alive for months. However, I am very interested in using
something as you describe above, with a library version of st that is
kept up-to-date. I didn't get your svtm to work out-of-the-box, but I
will continue to debug it myself. I got all the programs to compile
fine, but did go into each Makefile and, where necessary, added the '?'
character to this line "PREFIX ?= /usr/local".

-Ross
Mattias Andrée
2021-04-26 18:39:14 UTC
Permalink
On Mon, 26 Apr 2021 10:10:20 -0400
Post by Ross Mohn
Post by Jeremy
Post by Greg Reagle
I gave up on using dvtm a while ago (now I use tmux which is good) because it
would keep crashing. And I could not figure out how to debug the crashes or get
specific information about the cause of the crashes. If I had known about these
options then I would have compiled dvtm with them and maybe gave better bug
reports. (Though I know C, I am not an expert in C.)
I know what you're talking about & it's a pain in the ass. I believe
this is due to the ANSI parser implementation(vt.c) that DVTM uses.
https://github.com/jeremybobbin/libst
Try compiling & installing libst, then compile & run dvtm in libst/examples.
As much as I love dvtm, I believe it's a captive user interface, and
lacks the extensibility that a terminal multiplexer could/should provide.
Attempting to address this, I wrote, what I believe to be, a suckless
https://github.com/jeremybobbin/svtm
- abduco - {at,de}tach
- svt - TTY state/dumping/scrolling
- bmac - byte-for-byte macros
- itty - lets you run TTY input through a filter(such as bmac)
I'd like to add a "paner" program to that list, but for now, the above
is all you need to express any terminal-oriented workflow in a UNIX
environment.
I'm curious as to what y'all think.
Jeremy
I and my entire team have been actively and successfully using dvtm for
years. I haven't had it crash in a long while now, and I regularly keep
sessions alive for months. However, I am very interested in using
something as you describe above, with a library version of st that is
kept up-to-date. I didn't get your svtm to work out-of-the-box, but I
will continue to debug it myself. I got all the programs to compile
fine, but did go into each Makefile and, where necessary, added the '?'
character to this line "PREFIX ?= /usr/local".
Why do you need `?=`? The only difference between `=` and `?=`,
apart from `=` being the only assignment operator defined by POSIX,
is that `?=` has no effect if the variable is already defined (e.g. in
the environment), whereas `=` only has no effect if the variable is set
on the command line.
Post by Ross Mohn
-Ross
Ross Mohn
2021-04-26 19:43:09 UTC
Permalink
Post by Mattias Andrée
On Mon, 26 Apr 2021 10:10:20 -0400
Post by Ross Mohn
Post by Jeremy
Post by Greg Reagle
I gave up on using dvtm a while ago (now I use tmux which is good) because it
would keep crashing. And I could not figure out how to debug the crashes or get
specific information about the cause of the crashes. If I had known about these
options then I would have compiled dvtm with them and maybe gave better bug
reports. (Though I know C, I am not an expert in C.)
I know what you're talking about & it's a pain in the ass. I believe
this is due to the ANSI parser implementation(vt.c) that DVTM uses.
https://github.com/jeremybobbin/libst
Try compiling & installing libst, then compile & run dvtm in libst/examples.
As much as I love dvtm, I believe it's a captive user interface, and
lacks the extensibility that a terminal multiplexer could/should provide.
Attempting to address this, I wrote what I believe to be a suckless
alternative:
https://github.com/jeremybobbin/svtm
- abduco - {at,de}tach
- svt - TTY state/dumping/scrolling
- bmac - byte-for-byte macros
- itty - lets you run TTY input through a filter(such as bmac)
I'd like to add a "paner" program to that list, but for now, the above
is all you need to express any terminal-oriented workflow in a UNIX
environment.
I'm curious as to what y'all think.
Jeremy
I and my entire team have been actively and successfully using dvtm for
years. I haven't had it crash in a long while now, and I regularly keep
sessions alive for months. However, I am very interested in using
something as you describe above, with a library version of st that is
kept up-to-date. I didn't get your svtm to work out-of-the-box, but I
will continue to debug it myself. I got all the programs to compile
fine, but did go into each Makefile and, where necessary, added the '?'
character to this line "PREFIX ?= /usr/local".
Why do you need `?=`? The only difference between `=` and `?=`,
apart from `=` being the only assignment operator defined by POSIX,
is that `?=` has no effect if the variable is already defined, whereas
`=` only has no effect if the variable is set on the command line.
Post by Ross Mohn
-Ross
I have PREFIX defined in my environment and make use of it in scripts as
well as in Makefiles, so I don't generally have to pass it in on the
command line. I could certainly run it as `PREFIX=$PREFIX make`. I have
to use my own PREFIX on the several shared servers I use, where I compile
and install my own apps locally for just me. It's fine to make a decision
not to use `?=` because of POSIX or whatever.
Quentin Rameau
2021-04-26 20:16:05 UTC
Permalink
Hello,
Post by Ross Mohn
I have PREFIX defined in my environment and make use of it in scripts as
well as in Makefiles, so I don't generally have to pass it in on the
command line. I could certainly run it as `PREFIX=$PREFIX make`. I have
to use my own PREFIX on the several shared servers I use, where I compile
and install my own apps locally for just me.
You could run make -e, if you insist on using environment variables (I
find this a bit dangerous though).

Or just, almost as you said, make PREFIX="$PREFIX", that could even be
an alias (a bit more reliable than make -e).
Post by Ross Mohn
It's fine to make a decision to not use `?=` because of POSIX or whatever.
Yes it is
Jeremy
2021-04-28 16:21:57 UTC
Permalink
Post by Ross Mohn
Post by Mattias Andrée
Post by Ross Mohn
I and my entire team have been actively and successfully using dvtm for
years. I haven't had it crash in a long while now, and I regularly keep
sessions alive for months. However, I am very interested in using
something as you describe above, with a library version of st that is
kept up-to-date. I didn't get your svtm to work out-of-the-box, but I
will continue to debug it myself. I got all the programs to compile
fine, but did go into each Makefile and, where necessary, added the '?'
character to this line "PREFIX ?= /usr/local".
Why do you need `?=`? The only difference between `=` and `?=`,
apart from `=` being the only assignment operator defined by POSIX,
is that `?=` has no effect if the variable is already defined, whereas
`=` only has no effect if the variable is set on the command line.
I have PREFIX defined in my environment and make use of it in scripts as
well as in Makefiles, so I don't generally have to pass it in on the
command line. I could certainly run it as `PREFIX=$PREFIX make`. I have to
use my own PREFIX on the several shared servers I use, where I compile and
install my own apps locally for just me. It's fine to make a decision not
to use `?=` because of POSIX or whatever.
"because of POSIX or whatever" XD

It should work out of the box now.
Miles Rout
2021-04-20 14:38:48 UTC
Permalink
Post by Jeremy
Regarding readability: in terms of just the standard libraries, I
agree that Rust is more readable than C, especially when it comes to
iterating and generics.
impl I<Don't> {
fn know<'a, How::Someone>(could: &'a Say) -> This<'a>
with A: Straight<Face> + Send + Sync + 'a
{ ... }

static
struct this *
is_readable(struct to *me)
{ ... }

Syntactically, Rust's generics are a total mess.

Rust's iteration is even worse. Something that should be a simple
matter of for (i=0;i<N;i++) is turned into a frightful palaver with the
compiler with layer upon layer upon layer of complexity, and in the end
it all ends up getting compiled (over the course of several minutes)
into exactly the same for loop!
Post by Jeremy
https://github.com/etwyniel/c-generics
I really like this. Thanks for linking to it. This is the sort of
thing that we (the programming community in general) need more of:
stepping back from complexity and doing the minimum needed to actually
work *effectively*. Does this have all the features of C++ templates?
Of course not. It has the features one actually needs if one wants
generic data structures (which one probably doesn't need anyway).
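
For illustration, here is a minimal sketch of the macro-based technique --
not the linked repository's actual code, just the general idea of
instantiating a typed container per element type (any C99 compiler
should do):

#include <stdlib.h>
#include <stdio.h>

/* one macro expansion defines a complete, typed dynamic array */
#define DEFINE_VEC(T) \
        struct vec_##T { T *data; size_t len, cap; }; \
        static int \
        vec_##T##_push(struct vec_##T *v, T x) \
        { \
                if (v->len == v->cap) { \
                        size_t n = v->cap ? v->cap * 2 : 8; \
                        T *p = realloc(v->data, n * sizeof *p); \
                        if (!p) \
                                return -1; \
                        v->data = p; \
                        v->cap = n; \
                } \
                v->data[v->len++] = x; \
                return 0; \
        }

DEFINE_VEC(int) /* instantiates struct vec_int and vec_int_push() */

int
main(void)
{
        struct vec_int v = {0};
        size_t i;

        for (i = 0; i < 10; i++)
                vec_int_push(&v, (int)i);
        for (i = 0; i < v.len; i++)
                printf("%d\n", v.data[i]);
        free(v.data);
        return 0;
}
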
Post by Jeremy
Regarding ISO-standardization: could you explain a bit more about the
value of this?
Standardisation means there's a document you can read to understand
whether something is a bug in your code, a bug in someone else's code, a
bug in the implementation or (heavens forbid!) a bug in the
specification. ISO-standardisation is the gold standard of
standardisation because the rigorous process behind it ensures that
every necessary detail is fixed down in precise technical writing.

One of the side-benefits of this kind of writing is that it gives people
a language in which to talk about the language. I'm not sure whether
before C standardisation terms like declaration-specifier and declarator
were precisely demarked in the way they are now. But now that they are,
two people using different implementations in different countries with
different backgrounds can establish a consensus on the correct
syntactic interpretation of any piece of code.

Of course in the case of C++, that interpretation might be 'it's
syntactically correct... if the Riemann hypothesis is true'. But it's
better than you can do for Rust, where what is correct behaviour is...
whatever rustc does. Rust evangelists love to talk about how much
undefined behaviour C has, but it only has undefined behaviour because
it has defined behaviour! Undefined behaviour is behaviour that isn't
defined... by the standard. Everything in Rust is undefined behaviour!
Post by Jeremy
Regarding built-in concurrency: I would argue that pipe(3) & select(3)
is sufficient for built-in concurrency, though I understand this debate
is on-going.
+1.

When OOP became trendy in the 80s-90s-00s, almost every
language out there either added an OOP system or had an 'Object ____'
variant created with one. We now near-universally regard this
as a pretty bad idea.

It is my opinion that the profusion of async/await across the
programming world will be viewed in 15 years like the profusion of OOP
across the programming world is viewed now: a bad solution to a problem
("programming in the large" for OOP, "web scale" for async/await) that
doesn't actually really exist!
Post by Jeremy
I agree that Rust is better at marketing memory ownership. I'd argue
that Rust is better at marketing as a whole.
...
Rust is an incredibly fun language to write in, and I believe that the
enthusiasm for it is unparalleled, however, I'm not certain it's doing
anything better in terms of debugging & static analysis compared to the
C ecosystem.
Rust is marketed to a... certain kind of person. I don't think I need
to go into detail. As for their enthusiasm, my view is that they're
incredibly enthusiastic and evangelistic about it for one main reason:
it makes them feel smart, and little else does.

---

We'd all be better off if we focused our efforts on tools to make C
programming better. I was thinking today about how useful it would be
to have a way to indicate that a particular variable shouldn't be able
to impact the running time of a function for cryptography purposes.
(Generally, the control flow, resource use or running time of
cryptography-related functions shouldn't depend on secret values, as
those all have the potential to become side channels). If a compiler or
compiler plugin recognised such a directive, it could ensure it didn't
destroy that property. A static analysis tool could check the resulting
object code and warn you. Other tools could verify it with randomised
automated testing, etc.
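
For illustration, a minimal sketch of the kind of code such a directive
would protect -- a comparison whose running time does not depend on where
the first differing byte is. No such annotation exists in C today; the
attribute named in the comment is purely hypothetical.

#include <stdio.h>
#include <stddef.h>

/* returns 0 if the two n-byte buffers are equal, nonzero otherwise */
static int
ct_memneq(const void *a, const void *b, size_t n)
{
        /* hypothetically, a and b would carry something like
         * __attribute__((secret)) so tools could check this property */
        const unsigned char *pa = a, *pb = b;
        unsigned char diff = 0;
        size_t i;

        /* accumulate differences instead of returning early, so the
         * loop always runs exactly n iterations regardless of the data */
        for (i = 0; i < n; i++)
                diff |= pa[i] ^ pb[i];
        return diff;
}

int
main(void)
{
        printf("%d\n", ct_memneq("secret", "secreT", 6) != 0);
        return 0;
}
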

Generally speaking, these things would be better off as unobtrusive
extensions to C, able to be ignored by a compiler or other tool without
affecting the meaning of the code to retain compatibility. Rust has
many good ideas but it's just not trendy to implement those ideas in C
sadly.
Robert Ransom
2021-05-12 21:36:09 UTC
Permalink
Post by Miles Rout
We'd all be better off if we focused our efforts on tools to make C
programming better. I was thinking today about how useful it would be
to have a way to indicate that a particular variable shouldn't be able
to impact the running time of a function for cryptography purposes.
(Generally, the control flow, resource use or running time of
cryptography-related functions shouldn't depend on secret values, as
those all have the potential to become side channels). If a compiler or
compiler plugin recognised such a directive, it could ensure it didn't
destroy that property. A static analysis tool could check the resulting
object code and warn you. Other tools could verify it with randomised
automated testing, etc.
It would also be useful to be able to indicate that a variable's
value, and values computed from it, must not be left in memory or
registers to be picked up later by misbehaving code or debuggers.
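
For illustration, a minimal sketch of the usual workaround available
today: a plain memset() of a buffer that is about to go out of scope may
be removed by the optimizer, so the store is routed through a volatile
function pointer (OpenBSD's explicit_bzero() and C11's memset_s() exist
for the same purpose).

#include <string.h>

static void *(*const volatile memset_v)(void *, int, size_t) = memset;

/* wipe a secret in a way the compiler is very unlikely to elide */
static void
secure_wipe(void *buf, size_t len)
{
        memset_v(buf, 0, len);
}

int
main(void)
{
        char password[64] = "hunter2";

        /* ... use the password ... */
        secure_wipe(password, sizeof password);
        return 0;
}
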
Post by Miles Rout
Generally speaking, these things would be better off as unobtrusive
extensions to C, able to be ignored by a compiler or other tool without
affecting the meaning of the code to retain compatibility. Rust has
many good ideas but it's just not trendy to implement those ideas in C
sadly.
LLVM and Rust are well-funded. Funding is what attracts the 'trendy'
community, for good and ill.
Teodoro Santoni
2021-04-20 10:50:40 UTC
Permalink
I'm certain the only main reason Ada wasn't picked up is because it was
developed in the military, and the hippies in the FSF didn't like that.
Probably it's a matter of power (no GNU folks on that standards
committee) and of taste (Ada has never been hip, the object-oriented
rubbish alone occupied twenty years of GNU and FSF work, and
GNU traditionally embodies an important Lisp community).
Another problem is a lack of compiler diversity, but don't we also have
that with C? And unlike C, which becomes infested with
GNU extensions on a massive scale, Ada is still developed by a consortium
and relatively safe from that.
Imho there is good fragmentation and bad fragmentation; compiler
diversity ends up creating a corporation or a foundation or a
political body to control it, and then it's bad fragmentation.
Sebastian LaVine
2021-04-20 23:53:04 UTC
Permalink
I am curious, what experiences have people had with Go?
--
Sebastian LaVine | https://smlavine.com
Wolf
2021-04-24 11:01:53 UTC
Permalink
Hello,
Post by Sebastian LaVine
I am curious, what experiences have people had with Go?
The language is kinda fine I guess? It gets the job done, but I cannot
say I enjoy writing code in it that much. And some design choices
(context.Context) are in my opinion weird and grow through all of the
code like cancer.

I do, however, mind the huge binaries, the whole ecosystem full of NIH
syndrome, and some (IMHO) questionable choices in parts of the standard
library (especially encoding/json, which is... fun).

W.
--
There are only two hard things in Computer Science:
cache invalidation, naming things and off-by-one errors.
Sergey Matveev
2021-04-17 15:54:56 UTC
Permalink
Greetings!
we would save much more energy by banning autohell, Rust, bloated
electron-apps and Qt.
Well, I can only fully agree with that!
My comment about hash function performance was only related to the
defective idea that slowing them down will help us with those
proof-of-work schemes. Of course using SHA256 won't hurt much
in practice and won't even be noticeable compared to the things
you named. Personally I use it on the downloads page for my projects.
Especially autohell is really a huge waste of time and energy
Completely agree with that too! When I moved my C projects to the redo
build system, in which I also do configuration discovery, I was
astonished how fast the whole build process can be. ./configure often
seems to take 90% of the total build time, without any ability to
parallelize and without any benefit.

One thing that surprised me a lot was also Zstandard compression: faster
and with a higher compression ratio than gzip, and very fast decompression.
Some time ago I used to use xz because of its bandwidth savings, but I
completely moved to zstd's slow modes, which compress only slightly worse
than xz but decompress incredibly fast -- my CPU is no longer the
decompression bottleneck, and that was worth it. I heard that Arch Linux
and Fedora have moved to it and, while being a BSD fan, I respect their
move.
--
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263 6422 AE1A 8109 E498 57EF
Hiltjo Posthuma
2021-04-17 17:13:32 UTC
Permalink
Post by Sergey Matveev
Post by Markus Wichmann
I don't care about the speed of a hash function.
If we are talking here about checking software integrity, then speed is
important. Millions of people check the hash of downloaded files -- if
it is slow, then a huge quantity of time/energy is wasted. The less time
you spend on hashing, the less energy is wasted. SHA2 (and SHA3 too, if
we are not talking about KangarooTwelve modifications) is the worst
choice from an ecological point of view.
Generating hashes for all dl.suckless tarball files (287 files) takes 0.75
seconds in total; it is not an issue.

What is the preferred hash by Greta?
Post by Sergey Matveev
Post by Markus Wichmann
I think bitcoin mining is pollution
Agreed. But I see nothing in common between proof-of-work and hash
functions. A PoW (a "good" one, like Argon2, can be used for that task)
uses a special construction -- it does not matter whether the underlying
hash is fast or slow, because we can simply make more iterations with it.
Post by Markus Wichmann
and in case of password cracking, having a slower hash
function is an advantage for me
That can only mean that you still use an ancient PBKDF2-like scheme of
password strengthening. A long time ago the Password Hashing Competition
brought us memory-hard hashing functions like Argon2 (the winner) and
Balloon (which appeared after the PHC, but is my favourite). Hash function
speed does not play any considerable role there, because memory is
actively used and *is* the bottleneck for a brute-forcing operation.
Anyway, a slower hash for PBKDF2 means the number of iterations will be
smaller -- a faster hash means more iterations. So only dead-simple
password hashing constructions like hash(password) benefit from a slower
hash, and those are just silly and unacceptable to use at all if you
worry about brute-force cracking.
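
For illustration, a minimal sketch of memory-hard password hashing using
libsodium's crypto_pwhash() (Argon2id) -- libsodium and its "interactive"
cost presets are assumptions here; the point is only the contrast with a
plain hash(password). Compile with: cc argon2.c -lsodium

#include <stdio.h>
#include <string.h>
#include <sodium.h>

int
main(void)
{
        const char *password = "correct horse battery staple";
        unsigned char salt[crypto_pwhash_SALTBYTES];
        unsigned char key[32];
        size_t i;

        if (sodium_init() < 0)
                return 1;
        randombytes_buf(salt, sizeof salt);

        /* memory use, not raw hash speed, is the brute-forcing bottleneck */
        if (crypto_pwhash(key, sizeof key, password, strlen(password), salt,
                          crypto_pwhash_OPSLIMIT_INTERACTIVE,
                          crypto_pwhash_MEMLIMIT_INTERACTIVE,
                          crypto_pwhash_ALG_ARGON2ID13) != 0)
                return 1; /* ran out of memory */

        for (i = 0; i < sizeof key; i++)
                printf("%02x", key[i]);
        putchar('\n');
        return 0;
}
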
Post by Markus Wichmann
I am not a cryptographer. From what I understand about SHA-3, it offers
a better HMAC function
1) Do not confuse "MAC" and "HMAC". HMAC is a special construction
(H((K XOR opad) || H((K XOR ipad) || m))) that can make a MAC out of a
hash function (see the sketch below). It is required at least because
many hash functions are built on the Merkle-Damgård construction, which
has properties that prevent simple H(K || m) usage.
2) SHA3 is not Merkle-Damgård and can safely be used as a MAC with just
an H(K || m) calculation. HMAC can be used with SHA3 without any problems,
but it just calls the hash function one more time. For big messages that
plays no noticeable role at all (hashing a terabyte plus one more hash of
a few dozen bytes), but for small ones a "native" SHA3-MAC will just be
faster.
3) A "native" SHA3-MAC is not better. It is just the same from a security
point of view. Nothing is wrong with HMAC, nothing is wrong with SHA3-MAC.
The latter will just be faster, especially for small messages.
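
For illustration, here is a minimal sketch of the HMAC construction from
RFC 2104 in C, using OpenSSL's legacy SHA256_* interface purely as the
underlying hash H (the OpenSSL dependency and -lcrypto linkage are
assumptions; this is not production code). Compile with: cc hmac.c -lcrypto

#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

#define BLOCK 64 /* SHA-256 block size in bytes */

/* HMAC(K, m) = H((K' XOR opad) || H((K' XOR ipad) || m)) */
static void
hmac_sha256(const unsigned char *key, size_t keylen,
            const unsigned char *msg, size_t msglen,
            unsigned char out[SHA256_DIGEST_LENGTH])
{
        unsigned char k[BLOCK] = {0}, pad[BLOCK];
        unsigned char inner[SHA256_DIGEST_LENGTH];
        SHA256_CTX ctx;
        size_t i;

        /* keys longer than the block size are hashed down first */
        if (keylen > BLOCK)
                SHA256(key, keylen, k);
        else
                memcpy(k, key, keylen);

        /* inner hash: H((K' XOR ipad) || m) */
        for (i = 0; i < BLOCK; i++)
                pad[i] = k[i] ^ 0x36;
        SHA256_Init(&ctx);
        SHA256_Update(&ctx, pad, BLOCK);
        SHA256_Update(&ctx, msg, msglen);
        SHA256_Final(inner, &ctx);

        /* outer hash: H((K' XOR opad) || inner) */
        for (i = 0; i < BLOCK; i++)
                pad[i] = k[i] ^ 0x5c;
        SHA256_Init(&ctx);
        SHA256_Update(&ctx, pad, BLOCK);
        SHA256_Update(&ctx, inner, sizeof(inner));
        SHA256_Final(out, &ctx);
}

int
main(void)
{
        unsigned char mac[SHA256_DIGEST_LENGTH];
        size_t i;

        hmac_sha256((const unsigned char *)"key", 3,
                    (const unsigned char *)"message", 7, mac);
        for (i = 0; i < sizeof mac; i++)
                printf("%02x", mac[i]);
        putchar('\n');
        return 0;
}
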
Post by Markus Wichmann
(the whole padding thing is not needed anymore,
since hash extension attacks are not possible).
Yes, HMAC prevents them. SHA3 is simply immune to them out of the box,
because of its sponge construction. Nothing is wrong with either of them;
neither is better.
Post by Markus Wichmann
Cryptographers had a big old competition over the
"best" hashing algorithm and in 2012, Keccak won.
There are too many questions about what "best" means. Keccak won mainly
because it is not a Merkle-Damgård construction, which *probably* someday
may be found to have more issues. *Possibly* that construction itself was
a mistake. SHA3 is a ready replacement *if* something turns out to be
wrong with SHA2. Official statements require replacing MD5 and SHA1 with
SHA2+, but there are no statements or recommendations to replace SHA2
with SHA3, because nothing is wrong with SHA2.
Post by Markus Wichmann
And I will continue to advocate for its use exclusively over SHA-2 to
keep the zoo of hash functions small. SHA-3 should be used for its HMAC
property alone, and it is adequate for all other tasks, so there is also
no reason to keep SHA-2 around.
There is nothing wrong with using SHA3. But it is wrong to say that SHA2
is somehow bad and must be replaced. That is why even newer creations,
knowing that SHA3 has been here for a long time already, still choose
SHA2, exactly to keep the zoo of hash functions small, because no one is
going to replace already existing SHA2-driven software with SHA3. Why is
Git not moving from SHA1 to SHA3? Because there is no point in that;
nothing is wrong with SHA2, which already exists in every library.
But anyway there is no way to stay with only SHA2 or only SHA3, because
both of them are slow. I use a hash alone for verifying data integrity,
and I need speed. That is why BLAKE2/Skein are so popular and BLAKE3 is
gaining popularity too. As I remember, Skein even has a higher security
margin than Keccak, and BLAKE2 has a completely acceptable margin for all
cryptographic tasks too. Possibly KangarooTwelve will gain popularity as
well. But the zoo will always be with us.
And do not overestimate the importance of having a standalone MAC
function. Currently all protocols are moving to AEAD-based ciphersuites
(TLS 1.3 and Noise work only with AEAD ones), where a MAC is not used on
its own anymore. The CCM and EAX AEAD modes use only the cipher function,
GCM uses GHASH (not a hash, not a cipher), and ChaCha20-Poly1305 uses
Poly1305, which is a special one-time MAC function -- none of them use
anything related to hash functions or HMACs. Of course there are hundreds
of places where a MAC is used on its own, but not like dozens of years
ago, when it was applied to every IPsec/TLS/whatever packet to
authenticate it.
And if your protocol assumes that various hashes can be used with it
(Merkle-Damgård-based or anything else), then you are forced to use and
remember about HMAC. You could say to use SHA3 directly when SHA3 is used
as the hash, but that is a complication. That is why modern TLS 1.3 and
Noise still use HMAC, even if SHA3 is used with them. There was a
discussion about that on the Noise mailing list, because even BLAKE2
offers the same ability to be used as a MAC directly, and Skein too (all
of them are widely used), but the decision was to keep HMAC anyway, for
protocol simplification. So it is hard to see where a native SHA3-MAC can
be used in practice in protocols without hard-coded algorithms. Nothing
is wrong with HMAC-SHA3, except for one more hash of a small message at
the end, which is negligible and won't be used for transport traffic
anyway because of AEAD ciphers.
--
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263 6422 AE1A 8109 E498 57EF
--
Kind regards,
Hiltjo
Sergey Matveev
2021-04-17 17:47:14 UTC
Permalink
Post by Hiltjo Posthuma
Generating hashes for all dl.suckless tarball files (287 files) takes 0.75
seconds in total, it is not an issue.
Agreed of course. SHA2 is currently the best tradeoff. The only question
that could remain is SHA256 vs SHA512 (the latter is faster on 64-bit
platforms).
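
For anyone curious, a rough sketch of how to check this locally with
OpenSSL's one-shot functions (the -lcrypto dependency is assumed; this is
not a rigorous benchmark and numbers will vary by CPU):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <openssl/sha.h>

static double
now(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
}

int
main(void)
{
        size_t n = 64 * 1024 * 1024; /* hash 64 MiB of zeros */
        unsigned char *buf = calloc(1, n);
        unsigned char md[SHA512_DIGEST_LENGTH];
        double t;

        if (!buf)
                return 1;

        t = now();
        SHA256(buf, n, md);
        printf("SHA-256: %.3f s\n", now() - t);

        t = now();
        SHA512(buf, n, md);
        printf("SHA-512: %.3f s\n", now() - t);

        free(buf);
        return 0;
}
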
Post by Hiltjo Posthuma
What is the preferred hash by Greta?
What is that?
--
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263 6422 AE1A 8109 E498 57EF
Sergey Matveev
2021-04-17 21:22:42 UTC
Permalink
Post by Sergey Matveev
Post by Hiltjo Posthuma
What is the preferred hash by Greta?
What is that?
I was told offlist that (it seems) you were referring to Greta Thunberg.
I suppose she would blame us all, because we are using cryptographic
hash functions for things where simpler, cheaper and faster specialized
integrity-check/error-detection functions could be used instead.
Although something like BLAKE3 is still considered cryptographically
secure, a single thread of it is 12 times faster than SHA256 and it
scales to virtually any number of threads (it uses a Merkle tree out of
the box).
--
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: CF60 E89A 5923 1E76 E263 6422 AE1A 8109 E498 57EF
Daniel Cegiełka
2021-04-13 17:38:29 UTC
Permalink
Post by Mattias Andrée
On Tue, 13 Apr 2021 16:57:39 +0200
Post by Sagar Acharya
Sure, any good signature. SHA512 is stronger than SHA1, MD5 and SHA256. It shouldn't take a second more than others. Why use a weaker checksum?
SHA512 is actually more than twice as fast as SHA256 on 64-bit machines.
(I don't know which is stronger).
I see no point in having checksums at all, except for detecting bitrot.
BLAKE3 is one of the best ways to do it:

https://github.com/BLAKE3-team/BLAKE3

Even BLAKE2 is better than SHA256 or SHA512. Plus my _OLD_ one-file
implementation of blake2b (same license as the original), with no
support for keys.
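
For reference, a minimal sketch of unkeyed BLAKE2b hashing -- not the
one-file implementation mentioned above, but libsodium's
crypto_generichash(), which is BLAKE2b underneath (the libsodium
dependency is an assumption). Compile with: cc b2.c -lsodium

#include <stdio.h>
#include <string.h>
#include <sodium.h>

int
main(void)
{
        const char *msg = "hello";
        unsigned char out[crypto_generichash_BYTES]; /* 32 bytes by default */
        size_t i;

        if (sodium_init() < 0)
                return 1;

        /* NULL key and zero key length select plain (unkeyed) hashing */
        crypto_generichash(out, sizeof out,
                           (const unsigned char *)msg, strlen(msg), NULL, 0);

        for (i = 0; i < sizeof out; i++)
                printf("%02x", out[i]);
        putchar('\n');
        return 0;
}
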

Daniel
Post by Mattias Andrée
Signatures are of course good.
Post by Sagar Acharya
Thanking you
Sagar Acharya
https://designman.org
Post by Daniel Cegiełka
How/where SHA512 is better than SHA256 or SHA1? I don't see any added
value in this. If someone breaks into your server and replace files,
may also regenerate check sums (SHA256/512 or SHA3, scrypt etc.). The
use of MD5 will be equally (un)safe as SHA512 :)
A better solution is e.g. signify from OpenBSD or GnuPG.
https://man.openbsd.org/signify
Daniel
Post by Sagar Acharya
Can we have SHA512 checksums and sig files for the release gzips of suckless software?
Thanking you
Sagar Acharya
https://designman.org
Sagar Acharya
2021-04-16 18:01:56 UTC
Permalink
Was any decision taken with regard to this? Will we have checksums and sigs for releases in the future?

Thanking you
Sagar Acharya
https://designman.org
Anders Damsgaard
2021-04-16 19:16:30 UTC
Permalink
Post by Sagar Acharya
Was any decision taken with regards to this? Would we have certain checksums and sigs for releases in future?
Thanking you
Sagar Acharya
https://designman.org
Sagar, please realize that people are volunteering their time, and I think
most are doing it for fun. As far as I can tell, you are not entitled to
demand any action. If you want something done, send a patch and expect
it to be carefully scrutinized.
Hiltjo Posthuma
2021-04-16 21:09:48 UTC
Permalink
Post by Anders Damsgaard
Post by Sagar Acharya
Was any decision taken with regards to this? Would we have certain checksums and sigs for releases in future?
Thanking you
Sagar Acharya
https://designman.org
Sagar, please realize that people are volunteering their time, and I think
most are doing it for fun. As far as I can tell, you are not entitled to
demand any action. If you want something done, send a patch and expect
it to be carefully scrutinized.
I agree with this and for now it won't be changed.

The admins team will make a decision about this if needed.
--
Kind regards,
Hiltjo
Sagar Acharya
2021-04-17 05:45:16 UTC
Permalink
Ok. But this is a behavioral change, right? How can a patch help in this case?
 
Admins always protest a decision in almost every community if it isn't theirs. Am I suggesting something harmful here? It takes a minute to sign a release, and this improves security. It makes sure that the user gets the same piece of code that the dev made.

If that action helps suckless, why be reluctant because I initiated that mail?
Thanking you
Sagar Acharya
https://designman.org
Post by Hiltjo Posthuma
Post by Anders Damsgaard
Post by Sagar Acharya
Was any decision taken with regards to this? Would we have certain checksums and sigs for releases in future?
Thanking you
Sagar Acharya
https://designman.org
Sagar, please realize that people are volunteering their time, and I think
most are doing it for fun. As far as I can tell, you are not entitled to
demand any action. If you want something done, send a patch and expect
it to be carefully scrutinized.
I agree with this and for now it won't be changed.
The admins team will make a decision about this if needed.
--
Kind regards,
Hiltjo
Sebastian LaVine
2021-04-17 06:18:08 UTC
Permalink
Post by Sagar Acharya
Ok. But this is a behavioral change right? How can a patch help in this case?
Admins always protest the decision in almost every community if it isn't theirs. Am I suggesting something harmful here? It takes a minute to sign a release and this improves security. It makes sure that user gets the same piece of code that the dev made.
Decision-making takes time and effort. I don't know Hiltjo and the
others who run suckless, but I'm sure that they are busy folk who handle
lots of different things, not just suckless. They may even occasionally
have fun.

It's not a matter of whether or not this is "harmful". It's a matter of
whether or not it is important enough to make a change to the release
routines that have come about over the years (decades?) that they've
been doing this. It may take a minute to sign a release. But does it
take a minute to change the website so that the checksums are shared
properly? Does it take a minute to coordinate this change across all the
various suckless products? As the debate in this thread has shown: does
it take a minute to decide which algorithm should be used?
Post by Sagar Acharya
If that action helps suckless, why be reluctant because I initiated that mail?
It is possible that some may be reluctant to take suggestions on how
long-standing ways of doing things should be changed from somebody who,
as far as I can tell from the dev list archive, has only contributed so
far by suggesting that long-standing ways of doing things should be changed.

Of course I'm saying this as somebody who has I think contributed to
this list maybe...five or six times in total? 99.9% of the time I just
lurk and read through patches a bit, follow conversations on things. The
technical debate that goes on, I can barely follow sometimes. I just
like my dwm comfy and to stay on top of things that are going on.

What I mean to say is, don't be discouraged if immediate action isn't
taken on something that you have thought about and that you think is important.
Post by Sagar Acharya
The admins team will make a decision about this if needed.
--
Sebastian LaVine | https://smlavine.com
Hiltjo Posthuma
2021-04-13 15:57:30 UTC
Permalink
Post by Daniel Cegiełka
How/where SHA512 is better than SHA256 or SHA1? I don't see any added
value in this. If someone breaks into your server and replace files,
may also regenerate check sums (SHA256/512 or SHA3, scrypt etc.). The
use of MD5 will be equally (un)safe as SHA512 :)
One example where it would not be equally unsafe is if someone or some distro
mirrors the source-code.
Post by Daniel Cegiełka
A better solution is e.g. signify from OpenBSD or GnuPG.
https://man.openbsd.org/signify
Daniel
Post by Sagar Acharya
Can we have SHA512 checksums and sig files for the release gzips of suckless software?
Thanking you
Sagar Acharya
https://designman.org
--
Kind regards,
Hiltjo
Daniel Cegiełka
2021-04-13 17:20:09 UTC
Permalink
Post by Hiltjo Posthuma
Post by Daniel Cegiełka
How/where SHA512 is better than SHA256 or SHA1? I don't see any added
value in this. If someone breaks into your server and replace files,
may also regenerate check sums (SHA256/512 or SHA3, scrypt etc.). The
use of MD5 will be equally (un)safe as SHA512 :)
One example where it would not be equally unsafe is if someone or some distro
mirrors the source-code.
The only thing you get here is making sure the file has not been
corrupted while being transferred over the network. It has nothing to
do with security. If someone takes control of the server, they will
replace the file and generate a new checksum. To prevent this, the file
should be secured not with a checksum, but with asymmetric
cryptography (signify, gpg).
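
For illustration, a minimal sketch of that idea -- a detached public-key
signature, here with libsodium's Ed25519 rather than signify or gpg
themselves (libsodium and the inline example data are assumptions):

#include <stdio.h>
#include <string.h>
#include <sodium.h>

int
main(void)
{
        unsigned char pk[crypto_sign_PUBLICKEYBYTES];
        unsigned char sk[crypto_sign_SECRETKEYBYTES];
        unsigned char sig[crypto_sign_BYTES];
        const unsigned char *file = (const unsigned char *)"tarball bytes...";
        size_t len = strlen((const char *)file);

        if (sodium_init() < 0)
                return 1;

        crypto_sign_keypair(pk, sk);                    /* once, offline */
        crypto_sign_detached(sig, NULL, file, len, sk); /* per release   */

        /* users verify against the published public key;
         * a file replaced on the server fails this check */
        if (crypto_sign_verify_detached(sig, file, len, pk) != 0)
                puts("BAD signature");
        else
                puts("signature OK");
        return 0;
}
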
Post by Hiltjo Posthuma
Post by Daniel Cegiełka
A better solution is e.g. signify from OpenBSD or GnuPG.
https://man.openbsd.org/signify
Daniel
Post by Sagar Acharya
Can we have SHA512 checksums and sig files for the release gzips of suckless software?
Thanking you
Sagar Acharya
https://designman.org
--
Kind regards,
Hiltjo
Greg Minshall
2021-04-22 08:51:53 UTC
Permalink
fwiw, i think Ada sat (sits) on the "Pascal" side of the "C | Pascal"
divide.

that probably explains a lot of the subsequent history of who used it,
who didn't, etc.
Greg Minshall
2021-05-11 08:15:37 UTC
Permalink
Anders Damsgaard,
Post by Anders Damsgaard
However, I would *never* consider Julia a viable alternative to C/FORTRAN
for tasks including numerical simulations and massively parallel deployment
on HPC systems.
i'm ignorant, but curious. a friend who does high performance computing
is a fan of Julia, and in the past pointed me at

https://www.nextplatform.com/2017/11/28/julia-language-delivers-petascale-hpc-performance/

i'm not a believer in miracles, but the performance here seems fairly
impressive. does this contradict your pessimistic assessment of Julia
(without saying which view is correct)? is it maybe because of
specialized features of Julia (i can imagine Julia's support for SIMD
operations could help a lot with this sort of computing)? or...?

obviously, incompatibility between versions, slow garbage collection,
etc., which you mention below, are not good signs.

cheers, Greg
Post by Anders Damsgaard
While promising on paper, the reality is that the language is
immature, which creates issues with code compatibility between versions.
One example is that they flip-flop on the scoping rules for variables in
global scope, effectively breaking most scripts without warning. Also, the
julia+python+blas dependencies of installed packages and the computational
overhead quickly become very significant. Furthermore, the garbage
collection is poor and leads to an orders-of-magnitude increase in memory
footprint over days of running an iterative simulation.
Greg Minshall
2021-05-13 02:54:03 UTC
Permalink
Laslo,

thank you very much.
I'm just glad that I, as a numerical mathematician, don't have to use
MATLAB anymore. I initiated, and saw through, the change so that the
current lecture on numerical mathematics here in Cologne, which I
co-supervise, is using Julia for the first time (instead of MATLAB),
and I'm really happy about that.
congratulations! :)

cheers, Greg
