XenOS wrote:
Once again, I am not talking about the frame buffer here, but about the actual library code. Rendering bitmap fonts requires a lot less resources than, e.g., rendering some GUI with widgets and anti-aliased vector fonts. If it is built into the firmware, it does not even require any additional memory use, whereas a graphics library obviously needs to be loaded somewhere in RAM in addition to the firmware that is already in place anyway.
Win95 works with 4MiB of RAM, so how much is a lot?
If Qt is bloated, then that's a problem with Qt.
XenOS wrote:
It can be any reason, and the point is that a tool such as grep does not even need to know the reason. It just does the search for me, and I don't have to tell it why.
Of course it can be any reason, but providing the context allows the tool to do a better job.
XenOS wrote:
What if the string I am searching for is not a function (or variable or class) name, but something that has nothing to do with the program syntax at all? Say I notice a typo in the output of my program, and I want to know in which source file that particular string is located which has the typo. If I have an IDE that can search just for identifiers, it will not help me at all. All its great features, like context aware editing, are completely useless to me in this situation. Yet they take resources - obviously an IDE has more program code, needs to handle more cases, uses more memory than just running grep.
For the typo you'd be searching for the string containing said typo, which is still less to search through than searching everything.
And you shouldn't compare an IDE to grep, but rather the tiny part of the IDE that performs searches, which shouldn't be significantly bigger or smaller than grep.
Also, with grep you usually do provide some context, for instance which file(s) to search. I'm suggesting providing more context. For instance if the search is done at the OS level, something like:
- the typoed word
- that it's a C++ string
- project name (in the IDE this would be implicit)
The last two might come in a different order, or the result might already have popped up after typing the typoed word, if the search was fast enough.
I want to start with the full data set and then give just enough info to narrow it down until I get the results I want. I can't effectively do that with grep, though I do it anyway. What I want is something better than grep.
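To make the narrowing concrete, here's a rough sketch of such a hypothetical tool in Python (my own invention, not any existing search tool): the file-extension filter stands in for "project"/"C++", and the string-literal filter stands in for "it's a C++ string". Each piece of context shrinks the candidate set before the brute-force part even starts.

```python
import re
from pathlib import Path

def narrowing_search(root, word, extensions=None, in_string_literal=False):
    """Start from every file under root and narrow with each piece of context."""
    candidates = [p for p in Path(root).rglob("*") if p.is_file()]  # full data set
    if extensions:  # context: which kind of files belong to the "project"
        candidates = [p for p in candidates if p.suffix in extensions]
    # context: the word itself, optionally required to sit inside a quoted string
    if in_string_literal:
        pattern = re.compile(r'"[^"]*%s[^"]*"' % re.escape(word))
    else:
        pattern = re.compile(re.escape(word))
    hits = []
    for path in candidates:
        try:
            text = path.read_text(errors="replace")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            if pattern.search(line):
                hits.append((path, lineno, line.strip()))
    return hits
```

With the extension and string-literal context supplied, only the C++ files get scanned, and only quoted strings match, so a typo like "Helo" in a comment or in an unrelated text file never shows up.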
XenOS wrote:
If I want to know whether my string matches the links on a website, I just run it through grep. If I want to know whether there are short links, I just look at the source. There is no need to rely on "hope" if I can just test and see.
What if I want all the youtube video links, that is, links that actually resolve to a youtube video, not just links that happen to contain the word youtube?
And I can't just "test and see" if it's a tool I run automatically.
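For comparison, here's a sketch in Python (stdlib only; the URL patterns are my assumption about what counts as a "video link") of matching links by structure rather than by text:

```python
import re
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href attributes from <a> tags, i.e. actual links, not page text."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Links whose *structure* says "youtube video", not just any text saying "youtube"
VIDEO_RE = re.compile(r"(?:youtube\.com/watch\?v=|youtu\.be/)[\w-]{11}")

def youtube_video_links(html):
    parser = LinkCollector()
    parser.feed(html)
    return [href for href in parser.links if VIDEO_RE.search(href)]
```

A plain grep for "youtube" would also hit a link to youtube.com/about and would miss a youtu.be short link; and neither approach catches a generic URL shortener without actually following the redirect, which is exactly the case where "just test and see" stops working for an automated tool.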
XenOS wrote:
It is optimized to do exactly that - do a brute force search as fast and efficient as possible. Which is exactly what I want in the mentioned use cases.
But why do you want that? I'm guessing you have an actual reason for each of those grep searches you do, and in each case, if you could provide better parameters to grep, it could do the job better: running faster, using less memory, returning narrower results.
XenOS wrote:
That's a problem with the editor. I never had problems with large files and VIM.
Vim is one of the better ones, but it's not that good. I just tested opening a 300MiB text file on OpenBSD and it took around 10 seconds, not instant. Available memory dropped by about 150MiB.
I did a second test with a 2.3GiB file, it took ~90s.
I'm guessing the first thing vim does is go through the entire file to figure out how many lines there are, because the file doesn't know it. Once it had started I was able to jump to any line instantly, so I'm guessing vim keeps a list of where the lines start.
Most programming languages have strings that know their length, so the length was known when the file was created, but that key piece of info got thrown out, only to be recreated when the file is used, over and over again.
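To illustrate the point, recreating that thrown-out information is a single linear scan, and once you have it, any line jump is one seek. A sketch in Python (my illustration of the idea, not what vim actually does internally):

```python
def build_line_index(path):
    """Scan the file once, recording the byte offset where each line starts."""
    offsets = [0]
    with open(path, "rb") as f:
        for line in f:
            offsets.append(offsets[-1] + len(line))
    return offsets[:-1]  # drop the offset pointing past the final line

def read_line(path, offsets, lineno):
    """Jump straight to line `lineno` (1-based) using the index: one seek, one read."""
    with open(path, "rb") as f:
        f.seek(offsets[lineno - 1])
        return f.readline().decode(errors="replace").rstrip("\n")
```

If the index were stored alongside the file (or in it), the 10-second scan would only ever happen once instead of on every open.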
XenOS wrote:
And if the file is too large to open the whole file, which usually happens only with auto-generated files such as log files, nothing I would write or edit by hand, I ask myself why should I even open it in an editor? Or read / display the whole file? If I just want to extract some specific information, I use - once again - grep.
So with a log file, how would you find all instances of exceptions thrown from a Java program where the stack trace has a specific method call and that happened during a specific time period, say 1.1.2000 between 15:00 and 17:15?
And why is it difficult? Because all the semantic meaning of the data (dates and exceptions) has been thrown out and has been reduced to magical text, where the file doesn't even know how many lines there are, and the lines don't even know their own length.
If the semantic meaning was still there, then it would be easy to search it.
Have you ever had to correlate multiple log files from multiple systems? Not fun; depending on the complexity, I might end up piecing them together by hand in some kind of notepad. That's something humans are bad at but computers are good at, yet in practice I end up doing it manually.