If you release an object of any kind in mid-air, it will accelerate toward the center of the earth at about 9.8 meters per second per second. This is how we measure gravity. On the moon, gravity is only about 16.5% of earth's, since objects on its surface accelerate toward its center at roughly 1.6 meters per second per second. Knowing both values, we can say exactly how much more gravity there is on earth than on the moon. And we can decide a bunch of things based on these observations.
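Just to make the analogy concrete, here is the tiny calculation (rounded values, so the result is only approximate):

```python
# Rough comparison of surface gravity, using the rounded values above.
earth_g = 9.8  # m/s^2
moon_g = 1.6   # m/s^2

print(moon_g / earth_g)   # ~0.16, i.e. the moon has roughly 16% of earth's gravity
print(earth_g / moon_g)   # ~6.1, i.e. earth pulls about six times harder
```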
When it comes to software, I can tell you with almost complete certainty that Tor Browser is far more trustworthy than Google Chrome. I know it for a fact. For one thing, it's designed to ensure maximum privacy, while Google Chrome is literally a spyware program designed to mine your data so Google can sell ads. But my question is not whether Tor is more trustworthy. I know it is. The question is how much more trustworthy it is, exactly. What is the ratio of trustworthiness between the two? What percentage of Tor's trustworthiness does Google Chrome have?
To answer that, I will have to come up with a concrete trustworthiness scale. I'm not claiming that I know how to calculate software trustworthiness. This article is an exploration of the idea rather than a concrete manual on how to do it. Nevertheless, I hope at least to start a conversation on the subject. Because without a concrete system, all we can hope for is mindless speculation, which does not help whatsoever when you argue about which piece of software is "better".
_________
As you may already know, I stopped editing highlight videos from the recordings of Moria's Race live-streams. What you may not know is that I have also slowed down a lot with uploading the full recordings. This is due to my intense tiredness and utter laziness the moment the stream ends.
In the last week or so I started feeling a loss of interest in doing anything whatsoever. But since I do the live-streams, the main progress on the movie still marches forward at a steady pace.
For example, I've reached a technically challenging scene where the main characters drive a car on the water. I have to maintain both speed and a water simulation in one shot, which sounds relatively simple but is actually a complex undertaking.
Imagine a shot like this: a car in the foreground, moving extremely fast, with the camera moving together with it, while in the background we see the city, or the racetrack she is headed to. This means the entire body of water between the car and the background is in frame at all times.
If I tried to simulate all of the visible water, I would blow up the computer. But okay, I could probably simulate only the part the car actually travels through. That's not as hard as simulating all of the water, but because the car moves rather quickly, it's still a rather large simulation area, and therefore I would still blow up the computer.
I tried moving the simulation domain mesh along with the car, so it would simulate only the immediate surroundings of the car. But this didn't work. Blender just didn't understand what I wanted from it. So I had to try something else.
I can draw an animated texture. A bump map. It will displace the geometry of the water plane to make it look like it's splashing: the brighter the pixel, the more displacement. And I can generate such a texture using the dynamic paint tool inside Blender. But there is a catch. It will still blow up my computer.
Even though this idea is the lightest of the bunch, it still requires an immense number of polygons on the water surface for the bump map to look smooth. Displacement textures are not magic. They still need geometry to displace. If I don't have the polygons to move, it will not look any good. And since the car travels a very long distance, I would need an insane number of polygons.
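As a minimal sketch of the displacement idea in Blender's Python API (the object name "Water" and the image name "splash_map" are just examples, not my actual setup):

```python
import bpy

# Grab the water plane (assumed to be called "Water" in this sketch).
water = bpy.data.objects["Water"]

# Wrap an existing image (the animated bump / splash map) in a texture datablock.
splash_tex = bpy.data.textures.new("SplashTexture", type='IMAGE')
splash_tex.image = bpy.data.images["splash_map"]

# Add a Displace modifier: brighter pixels push the geometry up more.
disp = water.modifiers.new(name="SplashDisplace", type='DISPLACE')
disp.texture = splash_tex
disp.strength = 0.5          # how far a fully white pixel displaces
disp.texture_coords = 'UV'   # drive the displacement through the plane's UVs
```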
A regular way to get polygons is normal subdivision. Imagine one plane. Cut it in half both ways and you get four equally sized planes connected together. Repeat this process many times and you arrive at a dense grid whose points can be displaced with a texture to form the shapes of splashes. The problem is that to make it smooth enough close to the camera, I need roughly one polygon every few pixels of the final image. And because we are looking at the plane from the perspective of the car moving across it, the far-away polygons end up way denser in screen space than they need to be. I get too much unnecessary geometry far from the camera just to have enough smoothness right next to it.
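To see how quickly uniform subdivision gets out of hand, note that every subdivision level multiplies the polygon count by four:

```python
# Polygon count of a single plane after n uniform subdivision levels.
def polygons_after(levels):
    return 4 ** levels

for n in (5, 8, 10, 12):
    print(n, polygons_after(n))
# 5  ->      1024
# 8  ->     65536
# 10 ->   1048576  (over a million polygons)
# 12 ->  16777216  (and most of them wasted far away from the camera)
```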
Some years ago Blender experimented with an alternative way of subdividing polygons called "Adaptive Subdivision", and this feature still exists in Blender under experimental features. It works by subdividing more the closer the surface is to the camera, and on each frame it re-subdivides the plane to accommodate the motion of the camera. Basically, I can tell it to give me one polygon per pixel and it will do that across the whole plane. So the farther you go from the camera, the fewer polygons there are, but from the camera's perspective each polygon aligns with roughly one pixel of the final image.
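From the Python side the switch looks roughly like this (a sketch from memory; the exact property names may differ between Blender versions, and the object name "Water" is again just an example):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.feature_set = 'EXPERIMENTAL'   # adaptive subdivision hides behind this
scene.cycles.dicing_rate = 1.0              # roughly "one polygon per pixel"

water = bpy.data.objects["Water"]
water.modifiers.new(name="AdaptiveSubdiv", type='SUBSURF')
water.cycles.use_adaptive_subdivision = True  # re-dice every frame, based on the camera
```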
It's still an experimental feature, but I'm willing to take the risk and use it for the water splashes, at least while the car is moving fast. At the end of the sequence, with slower motion, I might actually use the regular fluid simulation.
Does Free Software tend to be more trustworthy?
In a few of Richard Stallman's talks that happened after the famous TED talk, which he now reuses everywhere as a great summary of what he wants to say, he added a few bits not present in the TED talk. For example, he added a poorly drawn graph of malware-ness versus honesty that looks like this:
This image has stuck with me. How does he know exactly where to put those dots? Quite frankly, from the looks of it, the graph is very speculative. He doesn't have any concrete data apart from perhaps the stories on GNU.ORG/MALWARE, and those do not contain the entirety of the picture. He is operating on a much looser spectrum. A political one, not a concrete scientific one.
But this graph could help us derive a scientific basis for calculating the trustworthiness of any specific program. To do that, we have to understand what trustworthiness is. You trust the program to do what? Or not to do what? Not to spy on you? Well, what about other types of malicious functionality? What about it just being annoying? There are computer viruses written by trolls purely to be annoying. They are not spyware. They are annoyingware.
For example, there could be a virus that changes your screen resolution every 10 seconds. It's not spyware, but it's malware. You want to use the computer, and it makes that harder for you. So malware is anything that does things other than what you wish...
Freedom 0 of Free Software states that the user may run the program as the user wishes. However the user wishes. Maybe non-malware is simply a program that provides freedom 0? Well, we have to dig deeper than that...
Function versus freedom
You have freedom 0 with any package manager that comes with a GNU / Linux distro endorsed by the Free Software Foundation, because all of the programs on those distros are Free Software and therefore hand you all four freedoms, including the zeroth one. But as you may have seen, a lot of people are rather unhappy with those package managers, since they "do not allow" them to install non-free software.
A package manager is agnostic about what you want to install, including the package manager in one of those distros. But to install a program, it needs to know where to get the program from, and this information is stored in a repository. The Free Software Foundation endorses only those distros that not merely ship 100% Free Software, but also, by default, provide no easy way to install non-free software by mistake. Meaning that none of those distros have any proprietary software in their repositories.
You can think of the repositories set up with the package manager as plugins. Even though they are online servers, for the sake of this argument let's imagine them as plugins. The ones you get with, let's say, Trisquel OS will not contain the "plugins" necessary to install Zoom or Discord, since those are proprietary software. And because those "plugins" are missing, you don't have the function of installing Zoom or Discord.
This is fundamentally different from not having the freedom to install Zoom or Discord. If you add the correct "plugin", the program will happily satisfy your will and install the damn things. So you have the freedom to do it. They just don't provide the function by default.
And quite frankly, software should not include absolutely every feature on earth. Different pieces of software are used to do different things. GIMP, for example, has no use for GPS features or for telling you which bus to take, since it's an image editor and not a map app. Firefox has no need for fluid simulation code, since it's a web browser and not a visual effects or 3D animation tool. Vice versa, Blender doesn't have to be able to render web pages.
But... you should still provide the freedom to do all of these things, even if they require some manual work. What you don't want is what DRM tries to impose on people, or what Apple devices do, where you literally need to give the device a lobotomy to unlock the ability to install applications not present in the main repository. Basically, the easier it is to change the functionality of any given piece of software to whatever you want it to do, the better.
Emacs is a text editor, but it has plugins that are games, a plugin that is a web browser, and so on and so forth. Microsoft Notepad, on the other hand ( as far as I know ), doesn't even have a plugin system. Which is not too bad by itself. Not every program is required to have a plugin system; any way to extend or change it to do what the user wishes is sufficient. But Notepad doesn't seem to have any.
So here we get to the next freedom, freedom 1, which is quite interesting. Freedom 1 states that the user should be able to study and modify the source code of the program, so the program does what the user wishes. Let's say there is a very small text editor, something like Notepad, but with its source code fully available. The user can simply edit it until the program does what the user wishes. Suddenly that's as good as plugins, or even better.
So to give the user freedom 0 you have to give the user freedom 1. Or at least a plugin system. But no... You have to give the user the source code even if there is a plugin system. And here is why...
The only way to combat malware is to delete it entirely
Let's say I have Google Chrome installed. It's malware, but it has a plugin system with tons of plugins. I can extend the program to do what I want. But can I remove the things it does that I don't want it to do? Some plugins could theoretically provide that ability. Ad-blockers block ads. But what if there is a malicious function inside Google Chrome that plugins cannot reach to turn it off?
The question is: is there anything the user cannot alter with plugins? Well, if you have ever written a program that supports plugins, you will know that it's not simple. You have to provide API calls that the plugin uses to interact with the rest of the program. Blender calls this API "BPY". Emacs uses its own dialect of the Lisp programming language to achieve something similar.
Not all parts of the program have API calls, only the important things. Some programs expose more, some less. But no program exposes all of itself to plugins. So at least some part of the program is not alterable by a plugin, and that part could be the very core of the user's annoyance. The "malware", so to speak.
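For the sake of illustration, here is the skeleton of a Blender plugin (add-on). All the names in it are made up for the example; the point is that it can only touch what the BPY API chooses to expose, nothing more:

```python
import bpy

bl_info = {
    "name": "Example Add-on",
    "blender": (3, 3, 0),
    "category": "Object",
}

class OBJECT_OT_example(bpy.types.Operator):
    """An operator can only call what the BPY API exposes."""
    bl_idname = "object.example_hello"
    bl_label = "Example Hello"

    def execute(self, context):
        # We can read and change exposed data, like object names...
        for obj in context.scene.objects:
            self.report({'INFO'}, obj.name)
        # ...but anything the API does not expose is simply out of reach.
        return {'FINISHED'}

def register():
    bpy.utils.register_class(OBJECT_OT_example)

def unregister():
    bpy.utils.unregister_class(OBJECT_OT_example)
```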
So it is by no means enough to provide only plugin support. You also have to provide the whole source code of absolutely everything that goes into making the program work. Or at least the parts you wrote, if the source code of the other parts is available elsewhere.
And because malware is anything the user might not like, not only spyware, you cannot just provide the source code of the parts you think are potentially malicious. It has to be the whole thing.
Copyright
In most countries, a user modifying software source code, even if it is provided, is still an illegal activity. When you write source code, by the law of many countries, you automatically get copyright on the written piece. Unless you copied it from a different piece; then it's illegal, since it's probably copyrighted by some other person.
To deal with it, a lot of programmers give users a copyright license to modify the software. Without that, the availability of the source code alone is not enough.
People who cannot modify software or write plugins
To be able to control what the program does, you need to understand how to change it. If there is a plugin system, there should be a manual on how to write plugins for it. If there is source code, it had better have comments in it, or at least not be written in a way that deliberately tries to confuse you.
But many computer users are not programmers at all, and telling them to write their own plugin or modify the source code is like telling them to go away. So it's a good thing to have a large number of ready-made plugins available. The more the better.
But since not everything can be changed using plugins, another thing is also important: forks.
A fork is a term software developers use to describe a split in the direction of development of a program. Let's say two programmers disagree about what the program should or should not do, so each makes his own version. This split down the middle is what we call a fork.
From the outside, having many forks looks like a bad idea. In fact, it's one of the best ideas. Forks let users of software choose between ready-made modifications. If the user is not a programmer, the availability of forks ensures that the user still has at least some freedom in deciding what the software does and does not do. So the more forks a program has, the better.
Also, I have to point out that those forks could be illegal too. There was an app called YouTube Vanced, which was a fork of the YouTube app for Android. They removed the malware of annoying ads. Some don't consider that malware, but as we already established, malware is subjective: anything a person doesn't like about a program is malware. And those people didn't like the ads in the YouTube app, so they removed them.
Unfortunately for them, YouTube didn't give them the copyright license necessary to create the fork. When you fork, you copy something under copyright, modify it, and then also redistribute it. This means you break copyright in two places, and each of them requires its own permission from the copyright holder.
Copyright in software works against the trustworthiness of software. If the programmer doesn't allow modifications or forking, the software is automatically untrustworthy, because all of the points above, apart from maybe plugins, are impossible without a proper copyright license. Maybe we should look at software licenses and their terms more in depth. Maybe software licensing matters a lot in figuring out whether software is trustworthy or not.
But my take on it is this: if enough people are changing the code, enough people are maintaining forks, and those forks do not disappear due to legal issues, then we can assume the license is sufficient for the project to be trustworthy.
Actual criteria for a trustworthiness scale
So far I have explained why Free Software, as defined by the Free Software Foundation, tends to be more trustworthy. But we want to arrive at a more mathematically useful definition: one that yields a number for each piece of software, so that one program's trustworthiness can be compared to another's.
I think the following variables are worth calculating:
Number of individual contributors. When software is developed by more than one legal entity ( more than one person, or more than one company ), each party tends to keep an eye on the others. Within one company, an agreement can be made not to tell people about certain malicious features put into a program. But when multiple unrelated companies look at the same piece of software, and when regular users can look at it as well ( by "look at it" I mean see and modify the source code ), malware tends to float to the surface and get deleted.
So the more individual contributors the software has, the better. Blender, for example, according to BLENDER.ORG, has about 725 separate entities that were involved in its development. Microsoft Notepad has 1: the Microsoft Corporation.
By this statistic alone, Blender already seems to be 725 times more trustworthy than Microsoft Notepad. But quite frankly, this is not enough. We have 725 times the staff to find problems in each other's decisions, but what if there is also 725 times the stuff to go through to find those problems? Microsoft Notepad is a rather small program, while Blender is a behemoth. What if Blender really is 725 times the size of Microsoft Notepad?
So the number of individual contributors should be adjusted by either lines of code or megabytes of program. Or we could introduce methods of estimating the lines of code in proprietary programs from their sizes in megabytes.
By looking around on the internet, I found that Microsoft Notepad is roughly 15 megabytes in size. Perhaps I'm wrong. Maybe it's smaller and I looked at some shady website. I don't want to install Windows just to check the size of this useless editor; I have Emacs installed, for god's sake. In any case, one place claimed it was about 15 MB.
Blender 3.3.0 weighs 877.4 MB. Almost a gigabyte. So if we adjust the number of contributors by the size of the finished program, we get these numbers:
Contributors / Megabytes
- Blender : 0.8263049920218829 ( 83 % )
- Notepad : 0.0666666666666667 ( 6.7 % )
Notepad sits at only about 8% of Blender's trustworthiness level, even adjusted for size. But that's only the first metric, and only if we do it by megabytes. Dedicated software for professionals, like Blender or Notepad, can be reasonably evaluated using megabytes. But something like games, for example, cannot. Two games with the same amount of source code can contain drastically different amounts of art. Art is not software, so it usually doesn't pose any threat of being malicious; it does nothing on its own. But to read that art, the game needs software, and what we care about here is only the software. So a much better statistic would be to compare lines of code.
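The arithmetic itself is trivial (using the figures quoted above):

```python
# Contributors per megabyte, using the figures from above.
blender = 725 / 877.4    # ~0.826 contributors per MB
notepad = 1 / 15         # ~0.067 contributors per MB

print(blender, notepad, notepad / blender)   # the last value is ~0.08, about 8%
```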
Trying to look up the number of lines of code in Notepad resulted in stupid search results. Well, the program is proprietary, and nobody seemed to care enough to look at how many lines of code there are, let alone publish this "valuable" information.
I think I know what I'm going to do. I will find a program with similar functionality to Notepad whose line count I do know, and I will compare their compiled sizes to estimate the lines of code in Notepad.
There are, weirdly enough, 81466 lines of source code in GNU Nano, a very basic terminal-based text editor. Which, to be fair, contains things like syntax highlighting that Notepad doesn't have. But let's use it, since we are going to adjust by the compiled size anyway.
The tar.gz package of Nano ( which is roughly comparable to an installable package of Notepad, which I found to be 15 MB ) is about 1.3 MB in size. Which makes me question everything from now on, but okay. If we divide 81466 lines of code by 1.3 MB, we get the number of lines of code per megabyte, which, according to this grossly oversimplified test, is 62666.15 lines of code per megabyte. Multiplying this by 15 MB ( the size of Notepad ) gives us 939992.3 estimated lines of code in Microsoft Notepad. ( Almost a million? What the hell is even going on here? )
Blender has 2475092 lines of code. Using the same math as before, but with lines of code instead of megabytes:
Contributors / Lines of code
- Blender: 0.0002929184046492009 ( 0.03% )
- Notepad: 0.0000010638 ( 0.0001% )
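Again, the estimate spelled out (assuming the 15 MB and 1.3 MB figures are even right):

```python
# Estimate Notepad's lines of code from GNU Nano's density, then redo the ratio.
nano_loc = 81466
nano_mb = 1.3
loc_per_mb = nano_loc / nano_mb          # ~62666 lines of code per megabyte

notepad_mb = 15
notepad_loc = loc_per_mb * notepad_mb    # ~939992 estimated lines

blender = 725 / 2_475_092                # ~2.9e-4 contributors per line of code
notepad = 1 / notepad_loc                # ~1.1e-6 contributors per line of code
print(notepad / blender)                 # ~0.0036, i.e. about 0.4% of Blender's score
```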
Now Notepad looks even worse than when I compared just the sizes. Still, what I found is weird. First, what the hell does Nano need that much code for? And why is Notepad so large? Did I read a wrong number? Maybe the site I visited didn't know how much Notepad weighs either, and perhaps if I'd clicked "Download" I'd have gotten some Trojan-horse-infected version. But if it's correct and Notepad really is 15 MB, maybe there is a Trojan horse in it after all, by default, one developed by Microsoft themselves. And these estimates only cover speculation about the number of people who worked on the damn thing. I haven't even started on the other metrics.
Number of plugins / settings / forks. We already looked at why plugins and forks are necessary for a program to be trustworthy. Settings are a natural extension of this. KDE, for example, lets people control so much through simple settings that it's almost equivalent to editing the source code of GNOME. So we need to count all three.
Blender has about 1844 UI-accessible settings. It also has about 1555 easily findable and installable plugins. It's rather pointless to count obscure plugins that are not in some easily findable database: if the developer didn't publish the plugin properly, so that the user might never find it, then for the sake of this calculation the plugin does not exist. Adding the plugins that come preinstalled with Blender to the plugins found on Blender Market, I arrived at the number 1555.
Forks are a little weird. Blender doesn't use GitHub for development; they have their own git website where they host only Blender-related projects. This is not really a big deal, but it means forks of Blender live somewhere else. Technically they do have a GitHub presence, mirrored from their main git repository, which lets many people fork it there. And if we look at the number of forks Blender has on GitHub, we get a number like 1203.
This number, though, is nearly useless for us. Of those 1203 forks, only a handful changed Blender's functionality enough to matter to the user. See, when a contributor wants to contribute a change, they need to fork the project, make the change in their own fork, and then present it in the form of a pull request, which the main Blender developers either accept or reject. A lot of people fork projects because they might, at some point, start looking at the code or changing something, not because they have a particular change in mind. So most forks just sit there, unchanged whatsoever, and the few that are changed get merged back into Blender. So they are not actually forks, but rather copies of the main project.
What we are really looking for are not forks as such, but rather distributions. GNU / Linux, for example, is famous for having tons of different distributions: Debian, Ubuntu, Arch... and the list goes on and on. All of those distributions are technically forks of other distributions, which are themselves forks of the GNU operating system, and sometimes forks of the Linux kernel too.
I found by digging online ( I know, a very scientific method ) only about 5 distributions of Blender:
- Bforartists, which is a rearrangement of Blender's UI to make it more comfortable for the developers of the distribution.
- Fluid-Designer, which is a Blender tuned more toward architecture? I suppose. I don't know. Their GitHub page is not very helpful, and the name is confusing.
- Mechanical Blender, which is a CAD program based on Blender.
- UPBGE, which is a very clever marriage of the Eevee real-time renderer and the old Blender Game Engine into a modern game engine.
- Armory Paint, which is a big change to Blender. They remade everything to turn Blender into a program that just paints textures, something like Substance Painter, but with Blender's UI as the base.
As for the same categories for Notepad: I assume it has zero plugins and zero forks, since it's a proprietary program without plugin support. How many settings it has, I don't know, and I don't have the means to count them. Perhaps someone can fill in the details in the comments below.
But look at it again. Notepad apparently has 0 plugins, while Blender has 1555. Does that mean Blender is 1555 times more trustworthy? Or should we adjust these numbers by some size value? Or perhaps we have to use a logarithmic scale to compare the two... This is something I'm struggling to understand.
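One possible way out ( just a thought experiment, not a recommendation ) is a logarithmic scale, which at least handles the zero gracefully and stops one huge count from dominating everything else:

```python
import math

# log1p(x) = log(1 + x), so a count of 0 maps to 0 instead of blowing up.
blender_plugins = 1555
notepad_plugins = 0

print(math.log1p(blender_plugins))   # ~7.35
print(math.log1p(notepad_plugins))   # 0.0
```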
Years since release. This is a very interesting one. Programs are never perfect, and the longer they are used, the more polished they get. It has little to do with the freedom of the software, but if we are calculating trustworthiness, we might as well include a metric for things like "how few bugs it has". For example, there could be a web browser that claims to be very secure because of its good built-in blockers. But what if those blockers have exploitable bugs? With time, these kinds of issues get ironed out. Therefore, the more years a program has behind it, the better.
Though here we fall into another pothole. Do we count years? Maybe days since the first release? Minutes? Seconds? Maybe the version number, or the number of major versions? What is the correct, comparable scale here?
I know that 1 mile is the same length as 1609.344 meters, and that one kilometer is 1000 meters. But I don't know how much "trustworthiness" there is in one year since release, or in one more contributor, or in one fork. These are wildly complicated things to calculate, and how would one even judge it? But that doesn't mean we shouldn't collect the data at all.
Lines of code. This one is strange. More lines of code generally means more features, therefore more settings and more stuff you can do with the piece of software. But more lines of code also means more bugs: with each line of code, you increase the chance of a bug occurring. So what do we do with this metric?
Perhaps we divide lines of code by bugs, to get a rough estimate of how many lines of code there are until the next bug is found. The more lines of code between bugs, the better. Blender has 2475092 lines of code. But how many bugs does it have?
We cannot reliably count the real number of bugs in any piece of software, since a lot of them are subjective to any given user. Even some "bugs" that people report to Blender developers are simply what's called "limitations". Though those are technically bugs too, since they are something a person doesn't expect to happen when doing a particular task. So how do we know all the bugs?
There is also the problem of bugs that haven't been found yet. Blender has 2.5 million lines of code. There are probably still unexplored areas of Blender: combinations of inputs that will result in an undesirable effect. To find all of these, you would have to brute-force the entire program. And with 2.5 million lines of code, I'd say that's harder than breaking into the NSA.
But what if we ignore all bugs and only concern ourselves with known bugs? Blender has a public repository of user-reported issues, and there are 4185 open issues currently unresolved. So Blender's score is roughly 591.4 lines of code per known bug. Okay...
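The division behind that number:

```python
blender_loc = 2_475_092
blender_open_issues = 4185

print(blender_loc / blender_open_issues)   # ~591.4 lines of code per open issue
```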
What is this metric for Notepad? Well, is there a publicly available repository of Notepad bugs? Looking online, I can find a few mentions of Notepad bugs, usually from users in forums asking how to troubleshoot this or that problem. But how can we reliably count all the bugs this way?
Typing the words "windows notepad bug" into searx ( default settings ) gives me 42 results. Perhaps this is the ultimate question for the ultimate answer to life, the universe and everything. But it doesn't help us. Does it have only 42 known bugs? Or is the search engine not trying hard enough? Or maybe I found the same bug twice? Just to compare with Blender, I typed "blender bug" into the same search box and got an error saying nothing at all was found, while I know there are at least 4185 open issues in the official bug repository. So should we take 42, divide by Blender's 0, and then multiply by 4185? Well... that seems to give us anywhere between 0 and 4185 times infinity times 42. Division by zero simply isn't defined.
Of course, we have this issue because Microsoft doesn't want ( for PR reasons ) people to find problems with whatever they make. But I think if we are calculating trustworthiness, openness about how many bugs were found in a particular piece of software is crucial. I could try estimating Notepad's bugs by comparing its size or lines of code to something similar, perhaps GNU Nano again. But the previous time we did that, it didn't do much good.
Conclusion
Blender has the following numbers:
- Number of contributors: 725
- Number of settings: 1844
- Number of plugins: 1555
- Number of forks: 5
- Number of years: 11
- Lines of code: 2475092
- Open issues: 4185
Fast LBRY GTK has these numbers:
- Number of contributors: 2
- Number of settings: 15
- Number of plugins: 0
- Number of forks: 0
- Number of years: 1
- Lines of code: 4340
- Open issues: 22
Microsoft Notepad is weird, since it's proprietary and estimating its trustworthiness is very hard:
- Number of contributors: 1 ( Microsoft )
- Number of settings: Unknown ( please tell me in the comments )
- Number of plugins: 0 ( presumably )
- Number of forks: 0
- Number of years: 39
- Lines of code: 939992 ( estimated )
- Open issues: The Ultimate Answer to Life, the Universe and Everything...
Maybe we can ignore all the things we cannot find out about a program. For example, Microsoft didn't set up an easy way to count its bugs, so we might assume there is a bug on every line of code. But then, Microsoft didn't release the true number of lines of code either. So do we just give it 0 points on anything touching this metric? This becomes less scientific and more game-like: we're just throwing rules out of thin air and seeing what sticks, instead of actually investigating how to get the best approximations of the values we are missing.
I chose Microsoft Notepad for this because it's rather small and insignificant, and people trust small programs precisely because they are small. But as we can see from the exploration so far, perhaps trusting those programs is a bad idea, no matter how small they are. Still, the question remains: how much of a bad idea is it, really? What exactly are the numbers?
Okay... let's imagine a perfect scenario where we do have all the correct numbers. We have the lines of code and the number of bugs, and everything else we need, for every program. How do we combine all these numbers into a single number on a trustworthiness scale?
Again, how much do contributors weigh in trustworthiness compared to code quality? How much do settings weigh compared to forks? I don't know. I don't even have the faintest idea of how to start figuring it out. Perhaps we could estimate the range of each metric by how much it affects the others. This could theoretically work.
For example, we could collect precise data on a million applications. This would not be an easy task, but it's doable. And then we could look for patterns in how much one metric affects another metric on the list above. We might find that on average, for every so many lines of code there is one contributor, or that dividing lines of code by contributors predicts a certain number of bugs per line of code. And so on and so forth. Such an intense research project could probably identify some ways of combining these metrics into one trustworthiness scale. But that's a lot of work.
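To make the idea slightly more concrete, here is a toy sketch of what such a combination could look like once the weights are known. The weights below are pure placeholders; the whole point of this article is that nobody knows the real ones yet.

```python
import math

# Placeholder weights: the real values are exactly what the proposed
# million-application study would have to discover. Pure assumption here.
WEIGHTS = {
    "contributors": 1.0,
    "settings": 1.0,
    "plugins": 1.0,
    "forks": 1.0,
    "years": 1.0,
    "loc_per_open_issue": 1.0,
}

def trustworthiness(metrics, weights=WEIGHTS):
    """Toy score: a weighted sum of log-scaled metrics (a sketch, not a method)."""
    score = 0.0
    for name, weight in weights.items():
        # log1p keeps huge counts (lines of code) from drowning out small ones (forks).
        score += weight * math.log1p(metrics.get(name, 0))
    return score

blender = {
    "contributors": 725, "settings": 1844, "plugins": 1555,
    "forks": 5, "years": 11, "loc_per_open_issue": 2475092 / 4185,
}
fast_lbry_gtk = {
    "contributors": 2, "settings": 15, "plugins": 0,
    "forks": 0, "years": 1, "loc_per_open_issue": 4340 / 22,
}

print(trustworthiness(blender))        # higher = "more trustworthy" in this toy model
print(trustworthiness(fast_lbry_gtk))
```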
No wonder it's simpler to argue about it politically rather than scientifically. Software is such a weirdly wild field that calculating its trustworthiness becomes a huge undertaking. Even collecting the data points for one program, in this case Blender, took me a lot of time. Imagine doing it a million times over.
As I said at the beginning of this article, I don't know how to calculate the trustworthiness of software. I have some ideas pointing in that direction, but they are not yet useful in any real form. I hope they might become useful, because some of you are smart enough to troubleshoot the various problems with my approach. Perhaps one of you will get a Nobel Prize for discovering how to reliably calculate the trustworthiness of software. But that's probably not going to be me. I have a cartoon to make. And then a game. And then another cartoon...
Happy Hacking!