Proof that RSA works
Due to recent interest by one individual, I looked up the proof that RSA works. I always found it non-intuitive that e and d being multiplicative inverses in a completely different ring (modulo φ(n), not modulo n) somehow means that raising a message to the power ed gets you back where you started modulo n. So here it is.
Given: n, e, d integers
p, q distinct large primes
n = pq
gcd(e, φ(n)) = 1
ed ≡ 1 (mod φ(n))
Claim: For all integers m < n: m ≡ m^(de) (mod n)
Proof:
ed ≡ 1 (mod φ(n)) is equivalent to
φ(n) | ed - 1
Since n = pq for primes p and q, φ(n) = (p - 1)(q - 1). Therefore
(p - 1)(q - 1) | ed - 1
Therefore both of these are true:
p - 1 | ed - 1 (1)
q - 1 | ed - 1 (2)
(1) more specifically means that for some integer k:
ed - 1 = k(p - 1) (3)
Notice also:
m^(de) ≡ m^(de - 1 + 1) ≡ m^(de - 1) m (mod p) (4)
But by inserting (3) into (4) we get:
m^(de) ≡ m^(k(p - 1)) m (mod p) (5)
Since p is prime, any integer m must either be coprime to p or be a multiple of p.
Case 1: m coprime to p
In that case, we can use Fermat's little theorem, which states that
m^(p - 1) ≡ 1 (mod p) (6)
We want to show that m^(k(p - 1)) ≡ 1 (mod p).
m^(k(p - 1)) ≡ (m^(p - 1))^k (mod p) (7)
Inserting (6) into (7) yields:
m^(k(p - 1)) ≡ 1^k ≡ 1 (mod p) (for all k)
Therefore equation (5) is equivalent to:
m^(de) ≡ m (mod p)
Case 2: m is a multiple of p
In that case
m^(de) ≡ 0 (mod p)
m ≡ 0 (mod p)
Therefore
m^(de) ≡ m (mod p) (8)
By symmetry, the same construction based on (2) yields
m^(de) ≡ m (mod q) (9)
Thanks to a property of modular congruence: when r and s are coprime, a ≡ b (mod r), and a ≡ b (mod s), then a ≡ b (mod rs). In our case, p and q are distinct primes (and therefore coprime), and we have (8) and (9), therefore
m^(de) ≡ m (mod pq)
≡ m (mod n)
Quod erat demonstrandum.
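If the symbols are easier to digest as running code, here is a minimal sketch in C that brute-force checks the claim for the classic toy parameters p = 61, q = 53 (so n = 3233, φ(n) = 3120, e = 17, d = 2753). The powmod helper is my own; real RSA needs big-integer arithmetic, this only illustrates the congruence.
Code: Select all
#include <stdio.h>
#include <stdint.h>

/* Modular exponentiation by repeated squaring: computes b^e mod m.
 * Safe here because m is tiny, so b*b never overflows 64 bits. */
static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m)
{
    uint64_t r = 1;
    b %= m;
    while (e) {
        if (e & 1)
            r = (r * b) % m;
        b = (b * b) % m;
        e >>= 1;
    }
    return r;
}

int main(void)
{
    /* Toy parameters: p = 61, q = 53, n = 3233, phi(n) = 3120,
     * e = 17, d = 2753 (17 * 2753 = 46801 = 15 * 3120 + 1). */
    const uint64_t n = 3233, e = 17, d = 2753;

    for (uint64_t m = 0; m < n; m++) {
        uint64_t c = powmod(m, e, n);   /* "encrypt" */
        uint64_t p = powmod(c, d, n);   /* "decrypt" */
        if (p != m) {
            printf("counterexample at m = %llu\n", (unsigned long long)m);
            return 1;
        }
    }
    printf("m^(ed) == m (mod n) held for all m < %llu\n", (unsigned long long)n);
    return 0;
}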
Carpe diem!
Re: Proof that RSA works
Thank you! A nice one! But I have to chew on it a bit...
Greetings
Peter
Re: Proof that RSA works
PeterX wrote: Thank you! A nice one! But I have to chew on it a bit...
It's probably easier to see if you plot it on a graph.
CuriOS: A single address space GUI based operating system built upon a fairly pure Microkernel/Nanokernel. Download latest bootable x86 Disk Image: https://github.com/h5n1xp/CuriOS/blob/main/disk.img.zip
Discord:https://discord.gg/zn2vV2Su
Re: Proof that RSA works
nullplan wrote: Due to recent interest by one individual, I looked up the proof that RSA works.
I assume the 'individual' is me. (If not, ignore this post completely.) It seems that you have gone through the hassle to prove a very difficult algorithm in response to my post, even though my post only asked how the algorithm is implemented and what exactly it is.
Unfortunately, I can't understand even a single line in this proof. But seeing this in an IT forum like this made me ask some other questions. Is hardcore mathematics necessary for being a good computer programmer? What is the use of these proofs except for logical mathematicians? As for me, I can't comprehend anything harder than 8th grade maths. Is that going to fail my career, as a computer programmer?
(Again, if this post feels irrelevant please ignore this. I think you can dm in this forum, if so just do that to delete this.)
A beginner developer/student. Likes to know stuff. Don't have an OS to put here.
Re: Proof that RSA works
And I thought nullplan meant me XD So we both felt addressed! (That's far better than no one feeling addressed...)
Your question isn't irrelevant. Mathematics is an important part of informatics/computer science. WHAT part of math is necessary depends on which kind of program you write. In general I think you need to be firm in hexadecimal and bit operations etc. and generally applied math (in contrast to theoretical math). Applied math is probably what you are capable of, you know, calculating and such. I'm not sure if 8th grade is enough for OS dev, but I guess others can narrow that down a bit more.
For graphics matrices are useful.
For RSA obviously modulo and congruence is useful.
And so on...
Greetings
Peter
Re: Proof that RSA works
I don't like math, and I get along fine coding. You may have to avoid cryptography and graphics stuff, but as long as you know arithmetic and maybe some binary math ideas, you should be good.
Re: Proof that RSA works
pranavappu007 wrote: I assume the 'individual' is me.
Yeah. I saw this proof years ago in uni and never found it again, so I went searching for it. Amazing what you can find on the Internet. But your interest in RSA only prompted me to look it up again.
pranavappu007 wrote: It seems that you have gone through the hassle to prove a very difficult algorithm in response to my post,
Nah. If it were a response I would have responded. This was entirely off-topic in that other thread. Also, I didn't actually create this proof myself, I only wrote down what I understood from things I read on the internet and heard in uni years ago.
pranavappu007 wrote: Unfortunately, I can't understand even a single line in this proof.
Maths is sequential. If you lack understanding of any part, anything that comes later will also be lost to you. So just start over at the top and take in what is written there. Anyway, the maths here is not hard, but some notation is probably not well known outside of number theory circles. The | means "divides". The whole thing basically hinges on the fact that if a product divides a number, then both factors must divide that number. For example: 6 divides 12 (that is, 12 divided by 6 is an integer), and 6 is the product 2*3. So therefore, both 2 and 3 must divide 12, and lo is it so! I think Euclid proved that one first, and that was a couple millennia ago.
pranavappu007 wrote: Is hardcore mathematics necessary for being a good computer programmer?
This is hardly hardcore, but you are correct, maths is only a small branch of computer science. Anyway, proofs like this actually help me understand the algorithm better, since I understand the requirements better. Why do we need d to be the multiplicative inverse of e modulo something completely separate from the modulus? Because then this whole thing works out, and otherwise it wouldn't.
pranavappu007 wrote: As for me, I can't comprehend anything harder than 8th grade maths.
8th grade. That was where the fun started for me. Quadratic functions and how to find their zeroes. A surprising amount of use I got out of these.
As with all knowledge, maths knowledge will only help you when you have it. Only when you know certain things will you be able to apply that knowledge when you see an opportunity to do so. And maths, even pure maths like the OP in this thread, comes up surprisingly often. You want to program a SatNav? Probably need to look up Dijkstra's Algorithm. You want to do anything computer graphics related? Brush up your Algebra knowledge, cause it is about to be matrix time. Unless you're like me. I intend to create a GUI system around a quadtree. But that is some way off. Of course you can look up all these things these days, with Wikipedia being what it is, but only when you know the words to search for.
PeterX wrote: In general I think you need to be firm in hexadecimal and bit operations etc.
Hexadecimal is merely an abbreviation for binary (hex encodes four bits per digit, octal encodes three per digit, which may be preferable if a byte encodes information in units of three bits; lots of the x86 machine language uses octal encoding. For example, look at the operand encodings: both ModR/M and the SIB byte consist of two three-bit fields and a two-bit one, so writing them down in octal is probably better for decoding). On that note: can everyone on this forum have some mercy on me, please, and no longer subject me to 64-bit constants written in binary? Thanks. I am not about to start counting the digits!
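To make the octal point concrete, here is a minimal sketch that splits a ModR/M byte into its 2-3-3 fields; the field names are just my shorthand, not taken verbatim from any manual.
Code: Select all
#include <stdio.h>
#include <stdint.h>

/* Split a ModR/M byte into its 2-3-3 bit fields.
 * Printed in octal, the three digits line up with mod, reg and rm. */
static void decode_modrm(uint8_t b)
{
    unsigned mod = (b >> 6) & 3;   /* top two bits   */
    unsigned reg = (b >> 3) & 7;   /* middle three   */
    unsigned rm  = b & 7;          /* bottom three   */
    printf("0%03o -> mod=%u reg=%u rm=%u\n", (unsigned)b, mod, reg, rm);
}

int main(void)
{
    decode_modrm(0305);  /* octal 305: mod=3, reg=0, rm=5 */
    decode_modrm(0104);  /* octal 104: mod=1, reg=0, rm=4 (a SIB byte follows) */
    return 0;
}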
As for bit operations, yes, they are useful, and having electronics knowledge here is really useful. Do you know how many uses the humble XOR gate has? (Exclusive Or, 1-bit half adder, programmable NOT, antivalence, ...) And all of these can show up in software as well.
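A few of those XOR tricks written out in C, as a quick sketch:
Code: Select all
#include <stdio.h>

int main(void)
{
    unsigned a = 1, b = 1;

    /* 1-bit half adder: XOR gives the sum bit, AND gives the carry. */
    printf("half adder: %u + %u -> sum %u, carry %u\n", a, b, a ^ b, a & b);

    /* Programmable NOT: XOR with 1 inverts a bit, XOR with 0 passes it through. */
    unsigned invert = 1;
    printf("programmable NOT: %u -> %u\n", a, a ^ invert);

    /* Antivalence: XOR is 1 exactly when the two inputs differ. */
    printf("inputs differ: %u\n", a ^ b);
    return 0;
}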
PeterX wrote: For RSA obviously modulo and congruence is useful.
Yeah, that is elementary number theory. If you are into abstract algebra, look up elliptic curve cryptography. A group on top of a field!
nexos wrote: I don't like math, and I get along fine coding. You may have to avoid cryptography and graphics stuff, but as long as you know arithmetic and maybe some binary math ideas, you should be good.
How do you implement your VFS? I want to use a directed graph. That requires some graph theory. In fact, I can also use graph theory to check file systems before using them. How do you store your virtual memory mappings? I will be using a self-balancing tree (not yet decided on the precise one, but red-black is apparently nicer than Anderson for this task). Binary trees, and how to balance them, are part of theoretical computer science, which is a subfield of maths, despite my joke above. At some point you will probably want to use regular expressions, and those again will require an understanding of theoretical computer science, this time of regular languages. Maths comes up more often than you think.
Carpe diem!
Re: Proof that RSA works
pranavappu007 wrote: Unfortunately, I can't understand even a single line in this proof.
Try to read up on Fermat's little theorem, maybe that helps, because RSA is essentially that theorem put into practice.
pranavappu007 wrote: But seeing this in an IT forum like this made me ask some other questions. Is hardcore mathematics necessary for being a good computer programmer?
That depends. It definitely helps a lot in all areas. And if you're targeting areas like cryptography or 3D development, then math in general, and trigonometry and linear algebra with matrix operations in particular, are a must. You simply can't do cryptography and 3D without math.
OSDev is possible without math, but a lot more difficult. For example, without math it's difficult to understand why a scheduler that's O(n) is a lot worse than a scheduler that's O(1). This is just one example where math is important for OSDev (but of course you could just copy'n'paste someone else's scheduler without knowing what you're doing). Similarly, if you want to go beyond the VGA teletype console and write a vector font rasterizer yourself, then math is a must (cubic and quadratic Bezier curves, efficient convex and non-convex polygon filling, linear interpolation, alpha blending etc.), or you could just copy'n'paste some code or use an existing library.
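Just to illustrate the difference, a rough sketch (not any real kernel's scheduler): picking the next task from an unsorted array means walking all n entries, while a per-priority ready bitmap lets you pick in a constant number of steps no matter how many tasks exist. ffs() here is the POSIX bit-scan from strings.h; a kernel would normally roll its own.
Code: Select all
#include <stdint.h>
#include <strings.h>   /* ffs(): find first set bit (POSIX) */

/* O(n): scan every ready task; lower value means higher priority. */
int pick_next_on(const int *prio, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (prio[i] < prio[best])
            best = i;
    return best;
}

/* O(1): one bit per priority level that has a ready task.
 * Returns the highest-priority non-empty level, or -1 if nothing is ready.
 * The cost does not depend on how many tasks are queued. */
int pick_next_o1(uint32_t ready_bitmap)
{
    return ffs((int)ready_bitmap) - 1;
}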
Let's put it this way:
"Is hardcore mathematics necessary for being a computer programmer?" - No.
"Is hardcore mathematics necessary for being a good computer programmer?" - Yes.
pranavappu007 wrote: What is the use of these proofs except for logical mathematicians? As for me, I can't comprehend anything harder than 8th grade maths. Is that going to fail my career, as a computer programmer?
Well, in order to use it, you don't have to understand the proof per se (of any theorem). It's enough if you know there's a proof that says the theorem is correct, and you can put the theorem into practice by implementing it on computers. But if you understand the whole thing in its full depth, that definitely helps a lot in creating better code.
Cheers,
bzt
Re: Proof that RSA works
nullplan wrote: How do you implement your VFS? I want to use a directed graph. That requires some graph theory. In fact, I can also use graph theory to check file systems before using them. How do you store your virtual memory mappings? I will be using a self-balancing tree (not yet decided on the precise one, but red-black is apparently nicer than Anderson for this task). Binary trees, and how to balance them, are part of theoretical computer science, which is a subfield of maths, despite my joke above. At some point you will probably want to use regular expressions, and those again will require an understanding of theoretical computer science, this time of regular languages. Maths comes up more often than you think.
Well, for one thing, I haven't gotten that far in an OS. Keep getting stumped at the scheduler. And I can do quite a bit of math, but it just isn't my favorite thing out there.
Re: Proof that RSA works
Sorry if this is a little off topic. Once I got onto the subject of graphics, I got carried away.
Some quite trivial things come under the umbrella of "mathematics". For instance, O(1) vs O(n): Once you understand the notation, it's obvious if you can count! O(n²) and O(log n) take a bit more experience, but some thought and a little graphing can help you picture them. Once you've got the graph of a quadratic curve (n²) in your mind, you can easily see why some code might seem fast with a small set of data (e.g. a scheduler with few processes,) but become terribly slow as more items are added. A logarithmic curve goes the other way: O(log n) may be fast for n=1 and surprisingly much slower for n=2, but n=21 may be hardly any slower than n=20.
Graphics is a broad field. You can do a lot without understanding matrix math; perhaps everything you really need. I think matrix math is necessary for rotation. It might arguably be necessary for scaling, but I can imagine how to scale an image without knowing anything about it. I just understand 2D arrays and how to average numbers. I can imagine a UI working just fine and looking nice with only blitting (copy/mod pixels) and scaling from source images. Gtk+ 1.2 with its "pixmap" theme engine worked that way. Some people said it was slow, but it was fast enough for me. I had late-90s systems at the time; my best was retrieved from garbage.
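For what it's worth, the "2D arrays and averaging" idea really is enough for a simple 2:1 downscale. A minimal sketch, assuming a grayscale image with even dimensions (the names are mine, not from any library):
Code: Select all
#include <stdint.h>

/* Halve a grayscale image by averaging each 2x2 block of source pixels.
 * w and h are the source dimensions and are assumed to be even. */
void downscale_half(const uint8_t *src, int w, int h, uint8_t *dst)
{
    for (int y = 0; y < h / 2; y++) {
        for (int x = 0; x < w / 2; x++) {
            int sum = src[(2 * y)     * w + 2 * x]
                    + src[(2 * y)     * w + 2 * x + 1]
                    + src[(2 * y + 1) * w + 2 * x]
                    + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * (w / 2) + x] = (uint8_t)((sum + 2) / 4);  /* +2 rounds to nearest */
        }
    }
}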
Vector graphics (lines & shapes) is kind-of standard equipment in graphics libraries and even drivers, but I don't see it as essential. When it was useful as a memory-saving scheme, I don't think half of today's matrix math had been invented. It certainly wasn't possible to rotate an ellipse with the APIs we had. Nor did most APIs support splines at all. Even drawing programs only supported 4-point Bezier splines at most. In any case, I think implementing diagonal lines alone is easy and gives you enough to make line graphs & some shapes. A very simple algorithm can smooth such shapes. I've thought up (but not implemented) an algorithm to produce spline-like curves with little more math than that. There's a caveat: To produce smooth shapes, I'd want to put a circle on the end of each line segent if it's thicker than 1 pixel. I went and looked up algorithms, (mid-point and Bresenham's,) but these are for drawing hollow circles where I want them filled. I think it would be enough to draw pixels where x²+y²<1, using symmetry as those algorithms do.
I didn't consider scalable fonts. On high-resolution screens, it's quite good enough to scale down large bitmap font characters. See Eagle Mode for a demonstration. However, on low-resolution screens, you arguably need hinting which is impossible to directly provide with scaled bitmaps. You can provide separate bitmaps for small sizes, but you'd lose some scalability. I wrote "arguably" for a reason. Years ago, when I used Eagle Mode on low-res screens, I noticed I zoomed in to make the font size larger than what I normally used. It wasn't bad, but larger text means less on-screen. I read a few whole ebooks that way. These days, vector font hinting is so unsuitable for me that I have most applications set to a very large font size anyway, and usually bold. Comparing Eagle Mode with PowerShell on a low-res screen today, vector font rendering is still a little better than Eagle Mode's best, and its best depends on the exact zoom level. On my 4k 28" (157dpi), I could read in Eagle Mode all day.
I'll admit I do understand a bit of the math for all this. It's just hard to take in when I'm not feeling up to it, and unfamiliar notation is baffling. It's not so hard to pick up a little bit at a time.
I half-way understand there's a caveat with using C's integer division for graphics: it rounds wrongly. It rounds negative numbers toward zero where graphics needs them rounded toward the most negative (floored). This problem goes away if you stick to positive numbers, but there's another one which doesn't, this time with round-to-nearest-even (the default floating-point rounding): it rounds x.5 toward the nearest even number. I can prove this is uneven with little technical notation. Imagine a number line with all the tenths: 0.0, 0.1, 0.2, ... 4.3, 4.4, 4.5, 4.6, ... all the way up as large as you like. Most of these round the same way under both rules: x.0 through x.4 round to x; x.6 through x.9 round to x+1. Notice that one sequence has more numbers than the other: x.0 x.1 x.2 x.3 x.4 is 5 numbers; x.6 x.7 x.8 x.9 is 4 numbers. Thus, x.5 must always round up to produce an even distribution. Round-to-even means sometimes x.5 will round up, sometimes down, so it will produce unevenness. Does it hold true at higher precision? x.00 through x.49 is 50 numbers; x.51 through x.99 is 49 numbers. If x.50 does not always round up, it will be uneven. The error is a smaller proportion of the whole, but it's still there. Intel FPUs round to even by default, but there the error is more likely to be unnoticeable.
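For the first problem, a small floored-division helper is all it takes. A sketch of what I mean:
Code: Select all
/* C's integer division truncates toward zero: -3 / 2 == -1.
 * Graphics code usually wants flooring instead: floor_div(-3, 2) == -2. */
int floor_div(int a, int b)
{
    int q = a / b;
    int r = a % b;
    if (r != 0 && ((r < 0) != (b < 0)))
        q--;
    return q;
}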
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
Re: Proof that RSA works
eekee wrote: Some quite trivial things come under the umbrella of "mathematics". For instance, O(1) vs O(n): Once you understand the notation, it's obvious if you can count! O(n²) and O(log n) take a bit more experience, but some thought and a little graphing can help you picture them.
Agreed! Graphs are the best way to see the trends.
eekee wrote: Graphics is a broad field. You can do a lot without understanding matrix math; perhaps everything you really need. I think matrix math is necessary for rotation.
Not really necessary, just makes your life easier and your rotation code faster. Matrix math is also good for effects like blurring, embossing, sharpening etc. See this or this GIMP plugin for example.
eekee wrote: I can imagine how to scale an image without knowing anything about it.
And doing scaling right IS difficult, the trivial solutions often produce very, very ugly results. Downscaling is often easier than upscaling. Take a look at my easyjpeg library, where my goal was compact code size and portability, therefore I have used the simplest nearest-neighbour algorithm without resampling (and it can't upscale without floating-point arithmetic, I was lazy I know).
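Roughly what "nearest neighbour without resampling" means, as a sketch (generic illustration code, not easyjpeg's actual source):
Code: Select all
#include <stdint.h>

/* Nearest-neighbour downscale of a grayscale image by an integer factor.
 * No resampling: each destination pixel simply copies one source pixel. */
void downscale_nn(const uint8_t *src, int sw, int sh, uint8_t *dst, int factor)
{
    int dw = sw / factor, dh = sh / factor;
    for (int y = 0; y < dh; y++)
        for (int x = 0; x < dw; x++)
            dst[y * dw + x] = src[(y * factor) * sw + (x * factor)];
}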
eekee wrote: I didn't consider scalable fonts. On high-resolution screens, it's quite good enough to scale down large bitmap font characters. See Eagle Mode for a demonstration.
Thanks for the link. I don't want to disappoint you, but EagleMode uses scalable vector fonts via the FreeType2 library. It also supports SVG and PostScript (both being scalable vector graphic formats).
eekee wrote: I half-way understand there's a caveat with using C's integer division for graphics: it rounds wrongly.
Only if you don't do it right. Granted, using only integer arithmetic in my SSFN library was one of the biggest challenges. The trick is, you shift up each number: where in decimal you would work with 0.01 steps, shifting up by 6 bits in integer binary gives you 1/64th precision. This way you can take care of the .5 rounding easily.
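Something like this minimal sketch (the numbers are illustrative only, not SSFN's actual internals):
Code: Select all
#include <stdio.h>

#define FP_SHIFT 6                 /* 6 fractional bits: 1/64th resolution */
#define FP_ONE   (1 << FP_SHIFT)

/* Convert a 6-bit fixed-point value back to an integer,
 * rounding to nearest instead of truncating. */
static int fp_round(int v)
{
    return (v + FP_ONE / 2) >> FP_SHIFT;
}

int main(void)
{
    int x = 3 * FP_ONE + FP_ONE / 2;   /* 3.5 in fixed point */
    printf("3.5 rounds to %d, a plain shift gives %d\n", fp_round(x), x >> FP_SHIFT);
    return 0;
}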
Cheers,
bzt
Re: Proof that RSA works
eekee wrote: Graphics is a broad field. You can do a lot without understanding matrix math; perhaps everything you really need. I think matrix math is necessary for rotation.
bzt wrote: Not really necessary, just makes your life easier and your rotation code faster. Matrix math is also good for effects like blurring, embossing, sharpening etc. See this or this GIMP plugin for example.
Thanks for the links. The diagrams and images in the convolution matrix page make it easy to understand. The other page is pretty good too, having brief, clear explanations along with code.
eekee wrote: I can imagine how to scale an image without knowing anything about it.
bzt wrote: And doing scaling right IS difficult, the trivial solutions often produce very, very ugly results. Downscaling is often easier than upscaling. Take a look at my easyjpeg library, where my goal was compact code size and portability, therefore I have used the simplest nearest-neighbour algorithm without resampling (and it can't upscale without floating-point arithmetic, I was lazy I know).
Now that I think about it, I'd use the math I know to implement scaling. I was never very good at math overall, but when I was a teenager I played with my calculator and got these mental graphs of how multiplication and exponentiation scale things... if that makes sense. (I wonder how I'd get on with Newton's work; he did all his math in terms of geometry.)
eekee wrote: I didn't consider scalable fonts. On high-resolution screens, it's quite good enough to scale down large bitmap font characters. See Eagle Mode for a demonstration.
bzt wrote: Thanks for the link. I don't want to disappoint you, but EagleMode uses scalable vector fonts via the FreeType2 library. It also supports SVG and PostScript (both being scalable vector graphic formats).
You almost got me there! The UI and file manager I tested with still use the bitmap font. The only quality improvement since I started using it in version 0.72 has been higher-resolution bitmaps. (PostScript is rendered by a separate process.) The changelog doesn't mention anything about vector fonts; no doubt it would if they were usable. The to-do list still contains this item: "Extend the text display capabilities by connecting to standard font libraries (but still continue to support the built-in font)."
eekee wrote: I half-way understand there's a caveat with using C's integer division for graphics: it rounds wrongly.
bzt wrote: Only if you don't do it right. Granted, using only integer arithmetic in my SSFN library was one of the biggest challenges. The trick is, you shift up each number: where in decimal you would work with 0.01 steps, shifting up by 6 bits in integer binary gives you 1/64th precision. This way you can take care of the .5 rounding easily.
Good point!
bzt wrote: Cheers, bzt
Cheers!
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie