Arenas are one of those patterns that are very easy to underestimate. I didn't know about them when I started programming, and I ran into a huge performance issue where I needed to deallocate a huge structure (sometimes tens of GBs, consisting of millions of objects) just to make a new one. It was often faster to kill the process and start a new one, but that had other downsides. At some point we added a simple hand-written arena-like allocator and used it alongside malloc. The arena was there for the objects in that big structure that would all die at the same point, and malloc was for all the other things.
The speed-up was impossible to measure, because deallocation that used to take up to 30 seconds (especially after repeated cycles of allocating/deallocating) was now instant.
Even though we had very little experience, it was trivial to do in C. IMO it's critical for a performance-oriented language to make using multiple allocators convenient. GC is a known performance killer, but so is malloc in some circumstances.
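For the curious, the core of it was not much more than this; a from-memory sketch with made-up names, not the actual code:

```c
#include <stdlib.h>

/* A bump arena: each allocation is a pointer bump, and freeing the
 * whole structure is a single free() instead of millions of them. */
typedef struct {
    char  *base;  /* backing block from malloc */
    size_t used;  /* bytes handed out so far */
    size_t cap;   /* total size of the backing block */
} Arena;

int arena_init(Arena *a, size_t cap) {
    a->base = malloc(cap);
    a->used = 0;
    a->cap  = cap;
    return a->base != NULL;
}

void *arena_alloc(Arena *a, size_t n) {
    n = (n + 15) & ~(size_t)15;            /* keep allocations aligned */
    if (a->used + n > a->cap) return NULL; /* out of arena space */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

/* The whole point: tearing down tens of GBs of objects that die
 * together is one call, not a walk over every object. */
void arena_destroy(Arena *a) {
    free(a->base);
    a->base = NULL;
    a->used = a->cap = 0;
}
```

Objects that die together go through arena_alloc; everything with its own lifetime keeps using plain malloc/free.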
>>That is how all standard library collections in Rust work
Yeah, and that's what's not going to work for high-performance data structures, because you need to embed hooks into the objects themselves, not just put objects into a bigger collection object. Once you think in terms of a collection that contains things, you have already lost that specific battle.
Another thing that doesn't work very well in Rust (from my understanding; I only tried it very briefly) is using multiple memory allocators, which is also needed in high-performance code. Zig takes care to make that easy and explicit.
The main thing is that the object can be a member of various structures. It can be in a big general queue and in a priority queue, for example. Once you find it and deal with it, you can remove it from both without needing to search for it.
Same story for games, where an object can be in the list of all objects and in the list of objects that might be attacked. Once it's killed, you can remove it from all the lists without searching for it in every single one.
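In C, embedding the hooks looks roughly like this (hypothetical names, just to show the shape of it):

```c
/* Intrusive links: the object itself carries one hook per structure it
 * participates in, instead of being wrapped by a container's node. */
typedef struct Link {
    struct Link *prev, *next;
} Link;

typedef struct GameObject {
    int  hp;
    Link all_objects;  /* hook for the "all objects" list */
    Link attackable;   /* hook for the "might be attacked" list */
} GameObject;

/* Circular doubly-linked list with a sentinel head. */
void list_init(Link *head) { head->prev = head->next = head; }

void list_insert(Link *head, Link *l) {
    l->next = head->next;
    l->prev = head;
    head->next->prev = l;
    head->next = l;
}

/* O(1) removal, no searching: the hook already knows its neighbours. */
void list_remove(Link *l) {
    l->prev->next = l->next;
    l->next->prev = l->prev;
    l->prev = l->next = l;  /* mark as detached */
}

/* On death, drop the object from every structure it is in. */
void on_killed(GameObject *o) {
    list_remove(&o->all_objects);
    list_remove(&o->attackable);
}
```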
I really hope that happens, but I see those announcements as a negotiating tactic. Switching will cost a lot (in training, unavoidable delays, mistakes, etc.), and both parties will have incentives to go back to the good old days.
I hope I am wrong on this. I hate that public infrastructure and bureaucracy run on Microsoft.
It looks very nice. One problem I've encountered: when you make a mistake, the name of the file you have to use disappears, and it's impossible to get it back.
What is this website created with, btw? I like the style a lot.
In the current election system, too, almost no one can do anything to verify the results. The percentage who can't is way higher than 95%.
There are many arguments against electronic voting but the current system is terrible and insecure.
>>And this is a deal breaker, as having the population believe, and be easily able to convince themselves, that their elections are free is an extremely important part of democracy, especially when things are not that rosy.
And it's currently not the case at all.
I think blockchain is a terrible idea for just about anything. Electronic voting is hard. Voting is hard. That doesn't change the fact that the current system is a complete security joke.
It is extremely easy to convince yourself that the current system works. Numerous people volunteer to work in election monitoring every year, and any person who is not sure can take a day or two off work to do so at their next election.
Plus, the system overall is dead simple; first-grade math skills are enough to understand it: we just count the votes in every precinct, and sum the votes up later. No hashes, no smart group-theory schemes, nothing complex.
In my country there is usually a recount in some "suspicious" voting stations. The recount almost never gives the same results as the original count. People are not very good at counting, even when they have good intentions.
>>First-grade math skills are enough to understand it: we just count the votes in every precinct, and sum the votes up later. No hashes, no smart group-theory schemes, nothing complex.
- people are bad at counting
- some people might be bad at counting on purpose
- some people might try to influence the results
This happens all the time, as proven by multiple recounts. I am not talking about the USA here but about EU countries, though I imagine it's the same in the USA. You just hope those swings are small enough not to influence the end results. I am sure this is usually true, but sometimes it's close, and then the odds are that at least some of those elections went the wrong way.
The "current election system", in the US, is not one single system. It is much closer to 50 separate systems with their own differences that range from quirks to wildly different fundamentals.
You can't make blanket statements about "the current election system" in the US because of this; you're going to have to talk in more specific terms, or people in states with well-designed systems are just going to keep popping up explaining why their system genuinely is good.
I tried using Firefox. I had it as my default browser for 2 years, but I just keep going back to Chromium. Firefox is slow and crashes/hangs too much in my experience. It was even very slow to open my automatically generated accounting tables (simple HTML, but very big files, because the accounting regulations in my previous country of residence were brain-dead). I don't think the benchmarks that often get published tell the whole story there.
Now I am back to Brave and very happy. Almost no ads, super fast, doesn't crash or hang.
I tried using Brave. It weirded me out with the crypto stuff and random popups. Now I'm back to Firefox, on all my devices, without any crashes ever. And it's just as fast as Chromium, which I very, very rarely use, and only for bad websites that do not work with Firefox.
In a very basic case, lock-free data structures make threads race instead of spin. A thread makes its own copy of the part of the list/tree/whatever that it needs to update, applies its changes to that copy, and then tries to swap its own pointer in for the data structure's pointer, provided the pointer hasn't changed in the meantime (there is a CPU atomic instruction for exactly that: compare-and-swap). If the thread fails (someone changed the pointer in the meantime), it tries again.
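A bare-bones sketch of that retry loop, using a lock-free stack push as the simplest example (my own illustration, assuming C11 atomics):

```c
#include <stdatomic.h>
#include <stddef.h>

typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* Head pointer of a lock-free stack. */
static _Atomic(Node *) head = NULL;

/* Push: prepare the new state privately, then try to publish it with a
 * single compare-and-swap. If another thread moved head in the
 * meantime, the CAS fails, `old` is refreshed to the current head,
 * and we simply try again; threads race instead of waiting on a lock. */
void push(Node *n) {
    Node *old = atomic_load(&head);
    do {
        n->next = old;
    } while (!atomic_compare_exchange_weak(&head, &old, n));
}
```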
I fail to see how a warning doesn't achieve the same thing while allowing you to iterate faster. Unless you're working with barbarians who commit code that compiles with warnings to your repo and there is zero discipline to stop them.
> I fail to see how a warning doesn't achieve the same thing while allowing you to iterate faster.
In almost every code base I have worked with where warnings weren't compile errors, there were hundreds of warnings. Therefore it is best just to set all warnings as errors and force people to correct them.
> Unless you're working with barbarians who commit code that compiles with warnings to your repo and there is 0 discipline to stop them.
I work with a colleague who doesn't compile/run the code before putting up an MR. I informed my manager, who did nothing about it even after it had happened several times (and after I had personally told the colleague he needed to do it and that it was unacceptable).
This, BTW, happens more often than you would expect. I have had to reject PRs because, just from reading the code, I could tell it would never have worked, so I know the person had never actually run it.
I am quite a tidy programmer, but it is difficult to get people even to write commit messages that aren't just "fixed bugs".
> The trick is to have warnings fail CI but not local builds
Which is annoying, because the CI pipeline can take something like 10 minutes to do the build, and then you need to re-commit after turning the warnings on locally.
There are other issues too, like your code compiling differently in CI vs. on your machine, which brings its own problems. Ignored warnings can cause other pieces to fail compilation or execution in other projects/libraries. I had this happen in C# and VB.NET.
It is best just to turn all warnings into errors and be done with it.
I've never heard particularly good reasons for not having it turned on all the time and that includes those mentioned in this thread.
> Then erroring on unused variables will not help you anyway.
The point I am trying to convey, which was a direct response to something the parent said:
"barbarians who commit code that complies with warnings"
IME it is very common for people to just straight up ignore warnings and issues, and sometimes they won't even check that the thing compiles. I've worked as a contractor at a number of companies, both large and small, and this has been a constant.
> Anyway, all your issues sound like management problems. Not all projects are run that badly.
Again, the point I was trying to convey is that the expectations people have on here are far higher than what some of us have to deal with on a daily basis. So you have to put in loads of automated checks that you wouldn't need to bother with when working with competent people.
> I work with a colleague who doesn't compile/run the code before putting up an MR. I informed my manager, who did nothing about it even after it had happened several times.
At this point what you need to do is stop treating compiler warnings as errors, and just have them fire the shock collar.
Negative reinforcement gets a bad rep, but it sure does work.
Yeah, but this case just seems to be strictly worse. It makes experimenting worse, and it makes it more likely (not less) that unused variables end up in the final version. I get being opinionated about formatting, style, etc. to cut off endless debates, but this choice just seems strictly worse for the two things it influences (experimenting and the quality of the final code).
If you want to leave a variable unused, you can just assign it to _ (underscore) though. IIRC gofmt (which your editor should run when you save) will warn you about it but your code will compile.
It's a slightly different mindset, for sure, but having gofmt bitch about stuff before you commit it, rather than having the compiler bitch about it, helps you "clean as you go" rather than writing some hideous ball of C++ and then spending a day cleaning the stables once it actually runs. Or at least it does for me...
You're not supposed to question the wisdom of the Go developers. They had a very good reason for making unused variables be an unconfigurable hard error, and they don't need to rigorously justify it.
Warnings are often ignored by developers unless you specifically force warnings to be compile errors (you can do this in most compilers). I work on TypeScript/C# code-bases, and unless you force people to tidy up unused imports/usings and variables, they will just leave them there.
This, BTW, can cause issues with dependency chains and lead to odd compile failures as a result.
The point being conveyed is that your experience is not representative of what commonly occurs. I have worked as a contractor in a number of different orgs (small, large, private, and public), and more often than not, unless you force people to fix these things, they won't.
> Find better managers.
How about you and others with similar attitudes realise that the world isn't perfect and sometimes you have to work with what you got.
Do you think I haven't been looking for a new position? Most of the jobs in my area are going to be more of the same.
Doesn't it make it more likely that unused variables stay in the codebase? You want to experiment, the code doesn't compile, you assign the variable to _ (probably via an automatic tool), and the code now compiles. You're happy with your experiment. Since the compiler doesn't complain, you commit, and the junk stays in the code.
Isn't that just bad design, making experimenting harder while also making it more likely that unused variables stay in the final version of the code?
It is indeed quite a controversial aspect of Zig's design. I would prefer it to be a warning. The argument that "warnings are always ignored" just doesn't hold, because anything can be ignored if there is a way to suppress it.
There was a recent interview where Andrew suggested, if I understood him correctly, that the future path of Zig is to make all compilations (successful or not) produce an executable. If there's something egregious like a syntax or type error, the produced artifact just prints the error and returns nonzero. For an unused parameter, the compiler produces the artifact you expect, but returns nonzero (so it gets caught by CI, for example).
That way, if you have a syntax error in file A and file B is just peachy keen, you can keep compiling file B instead of stopping the world. Then the next time you compile, the result of compiling file B is already cached.