XenOS wrote:
LtG wrote:
The tool wouldn't be significantly more complex, just some interface for allowing new formats to be understood by the tool.
That's exactly what makes it complex - you need to add something for every format you want to handle.
That's not the same thing as complex. Plus, you already have to have it; otherwise the recipient of the data wouldn't be able to understand it. I'm suggesting we also share it with the search tool.
XenOS wrote:
LtG wrote:
I'd rather use the whole IDE; granted, most (all?) of them are slightly bloated, but I'd still rather get all the benefits they offer.
That's a matter of choice and use case. For my use case IDE would not have the benefits I actually need.
So you're not using any IDE? What are the benefits you need that IDEs don't provide? Have you tried good IDEs?
XenOS wrote:
Of course I need an editor and a compiler, but I don't need a GUI for either of them.
So a CLI setup that brings compiler and editor together is fine, but an IDE that brings them together with the same RAM/CPU footprint is bad? If it makes even one thing faster/easier/more convenient, then it's a win. After that, keep on piling up the wins.
I'm not saying it can't be done on the CLI, but rather that we should strive for something better. And given that there are millions of programmers, every small win multiplies into something similar to compound interest.
XenOS wrote:
LtG wrote:
Because you want control over it?
If I want control over it, it's not automatic. Your demands on the tool contradict each other.
No they don't. Name a single Linux tool that doesn't depend on a UI (config files, program args, etc.) to some extent. Of course I want control over a tool.
XenOS wrote:
LtG wrote:
First your argument was that vim can handle large files, now it's that when it can't, those files are too large and it's the files fault. So by definition vim is perfect, because when it fails it's somebody else's fault.
You're mixing things up. I said that I had no problems when we were talking about files of "a few MiB". And even those I rarely open with VIM or any other editor unless I have to. But opening several hundred MiB or GiB in an editor is a doubtful way of solving a task.
You said there are no problems with vim and large files.
And solving a task by looking at logs of GiBs is doubtful? Large enterprises have large logs.
XenOS wrote:
LtG wrote:
You call that simple? I call that convoluted. Assume it's a 1 GiB log file: how many times does your proposed solution go through all the data?
Once. Only the first filter stage goes through all the data. The second stage goes through the output of the first stage (which is significantly less, if I order the filters in such a way that the first one keeps the least amount of data), and so on.
I thought sed (to remove Java stack-trace line feeds) goes through the entire data set, then grep goes through all of it again? Wastefully.
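For what it's worth, the staged-filter pattern being debated can be sketched in a few lines (the log lines here are invented for illustration): only the first stage scans everything, and each later stage scans only what its predecessor let through.

```python
# Staged filtering: stage 1 scans every line; stage 2 scans only
# stage 1's (smaller) output. Log lines are invented for illustration.
log = [
    "ERROR timeout in foo",
    "INFO all good",
    "ERROR disk full",
    "DEBUG noise",
]

stage1 = [line for line in log if "ERROR" in line]       # scans all 4 lines
stage2 = [line for line in stage1 if "timeout" in line]  # scans only 2 lines
print(stage2)  # ['ERROR timeout in foo']
```

This mirrors a shell pipeline: whether the total work is "once through the data" or "several times through" depends on how much the first stage cuts down.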
XenOS wrote:
LtG wrote:
For a one-off search of something I have to write a Perl script? And you don't find anything wrong with that picture?
I don't have to write a completely new Perl script for every single search. As I said, it's an alternative method. I'd write a Perl script if I had to deal with similar searches another time, just with different search criteria (date, time, function name...).
I said a one-off search. Yes, I've created and used (and reused others') "incantations" multiple times, but I've also been in situations where I needed to do it just once. I've also deemed it reasonable to write a C program to go through the data because it was easier than figuring out how to get the same thing done with awk/sed/grep. Like parsing a routing table and streamlining it (combining subnets if the supernet has the same next hop, etc.).
The routing "engine" already knows whether they belong together, yet I have to build a poor man's version of that understanding/semantics to get the job done. I just don't have access to it, so I have to recreate it, and due to practical concerns I have to cut corners.
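As a rough sketch of that routing-table task (the prefixes and next hops are invented), Python's standard ipaddress module already carries the adjacent-subnet "semantics" that otherwise has to be recreated by hand:

```python
import ipaddress

# Hypothetical routing-table fragment: (prefix, next hop) pairs.
routes = [
    ("10.0.0.0/25", "192.168.1.1"),
    ("10.0.0.128/25", "192.168.1.1"),  # same next hop: mergeable with the line above
    ("10.1.0.0/24", "192.168.1.2"),
]

# Group prefixes by next hop, then let the library merge adjacent subnets.
by_hop = {}
for prefix, hop in routes:
    by_hop.setdefault(hop, []).append(ipaddress.ip_network(prefix))

merged = {
    hop: [str(net) for net in ipaddress.collapse_addresses(nets)]
    for hop, nets in by_hop.items()
}
print(merged)  # the two /25s with the same next hop become one /24
```

The point stands either way: the knowledge of which subnets belong together exists somewhere; the question is whether the tool at hand exposes it or forces you to rebuild it.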
XenOS wrote:
LtG wrote:
Aside from that, your argument seems to be that because it can be done in a CLI, there's no reason for a GUI (or anything better).
That is neither what I said nor my line of reasoning.
If the CLI has advantages over the GUI, then there is reason to use the CLI instead of the GUI. And I mentioned such advantages in my previous posts (fewer resources, etc.).
So do you use a Linux system without a GUI? If not, then the resources are already expended anyway, which is part of my point. Using a CLI on top of that doesn't save resources.
A search for "typoed" would still be written as text, but if I can be more specific (e.g. pertaining to project X, within a std::string, etc.), then I'd happily reap the benefits.
I'm saying a CLI itself brings nothing to the table. It only restricts in a bad way.
XenOS wrote:
LtG wrote:
By that line of reasoning we could all use brainf*ck the programming language, I wouldn't. I want something better.
By my line of reasoning you would use brainf*ck if it has some advantage over other languages, for the particular use case.
But earlier you were presenting grep as useful in the _general_ case, not a specific (particular) one.
XenOS wrote:
If that suits your task then go for it. That's not my use case, though.
LtG wrote:
If we reduce CLI to mean textual, then the title question of this topic is:
Can you live without a GUI (including browsers)? For me that's a no.
Can you live without a CLI? Yes. If the necessary changes were made I'd do so happily.
My answer would be the opposite. Again, for my use case.
In what way is it your use case? Specifics help me (us) to better understand what is needed from a system.
And honestly, replying on forum.osdev.org, can you say you don't need a browser?
XenOS wrote:
Of course, but that's neither an argument in favor of nor against a GUI. I can use, e.g., VIM with something like YouCompleteMe and semantic completion, so it will do all these things in a pure TUI as well. Of course it needs more resources than just a plain text editor + grep. And it does make sense to go this way if one wants to work on a project and perform several operations on this AST. But that's not the use case I mentioned as an example of what I use grep for. My use case was a single search, and for that I don't have to parse all source files into an AST, even those functions which are completely
YouCompleteMe does it (more or less) exactly as any IDE does, so now that overhead is acceptable?
For your use case of a single search, you have to provide grep with the relevant context on _how_ to search for something. That knowledge already exists; you shouldn't have to duplicate it.
As for the AST, that too already exists (intermittently), why can't it exist all the time and be taken advantage of?
XenOS wrote:
If I have as many resources as I want and need to use such a tool, then of course it does not matter how many resources this tool needs. But if I work in a restricted environment - devices with low memory, working over a slow internet connection - resources do matter, and it might simply not be possible to run such things in the background. But again, this has nothing to do with the GUI vs. TUI question.
How much are those resources? Are they unreasonable? For an "average" C++ project I'd guesstimate them to be on the order of MiBs. When dev'ing, I don't work over a slow Internet connection, so that's a moot point. I'm not suggesting we should dev on an IoT device. My laptop has 64 GiB of RAM, and my next laptop will have the same or more.
Granted, Google's mono-repo (good or bad) has billions of lines of code, but I would argue that it's largely redundant, and also that their devs are able to deal with it on normal laptops. So it's a non-issue, even for one of the largest code-bases in the world.
In addition, because of structured code (as opposed to magical text), I would argue the size is smaller: the same names (variable, function, etc.) don't need to be repeated in memory.
XenOS wrote:
That's exactly my point. One day I might be searching for functions being called, and of course I can have some IDE that creates an AST which answers this question faster than grep. But on another day I might be searching for links in an HTML file. Of course I can also parse that first into some kind of markup AST, but I need a different parser for that. Yet another day I might be searching a log file, and there might not even be any parser for that. That's why I want a generic tool that solves these tasks by reducing them to a common, low-level task; in a minimal, resource-limited environment that generic tool might be all I can use instead of specialized parsers for every single task I might encounter.
HTML = "some kind of markup AST".
I'm suggesting only ever using "some kind of markup AST", so we reap the benefits. No re-parsing required. Resource-constrained environments would also love it.
XenOS wrote:
Schol-R-LEA wrote:
It is a good deal more sophisticated - and more resource-intensive - than either of you seem to realize. I know that this may not sound relevant, but my point is that the 'simpler' tool in question is anything but, and both of you are basing part of your argument on the same fallacy; even if you are both using it for opposing points, the fact remains that it is a fallacy.
I'm not saying that tools like grep use no resources at all. I just avoid unnecessary resource usage (such as displaying a GUI) when they can be used more wisely (such as the actual search task grep or any other program performs), and I don't have unlimited resources to do both.
Avoiding "unnecessary" resource usage may be a form of premature optimization. By not using an AST, every IDE/compiler/editor will have to understand C++ (to a point) on its own, and instead of indexing your HDD you have to do a full brute-force search (10 TiB vs. a few MiB). I'm arguing that paying the upfront cost is often simpler and more cost-efficient.
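The indexing-vs-brute-force trade-off can be sketched in toy form (file names and contents invented): pay a one-time cost to build an index, then answer every later search without rescanning anything.

```python
# Toy inverted index: word -> set of files containing it.
# File names and contents are invented for illustration.
files = {
    "a.cpp": "int main",
    "b.cpp": "void helper",
    "c.cpp": "int helper",
}

index = {}
for name, text in files.items():  # one-time upfront scan of everything
    for word in text.split():
        index.setdefault(word, set()).add(name)

# Each later search is a dictionary lookup, not a rescan of every file.
print(sorted(index["helper"]))  # ['b.cpp', 'c.cpp']
```

Real code search and IDE symbol databases are of course far more elaborate, but the economics are the same: the upfront cost is paid once, the savings recur on every query.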
PS. I'd also like to thank you for continuing with this thread, even if at times it might seem futile. At least we both have to consider both sides.