
64bit coordinates , overkill IMO #1113

Open
dobkeratops opened this issue Dec 29, 2017 · 6 comments

Comments

@dobkeratops

dobkeratops commented Dec 29, 2017

IMO 32 bits are the most sensible default for coordinates. Without meaning to be critical (it's a great library, and EDIT: I see there's a typedef, so I could always swap it in a fork), I hope the following general discussion will explain why seeing f64 GUI coords triggers my intuition in a negative way:

  • Rust, being efficient, can run in many places (IoT embedded displays: Conrod on Beaglebone Black or other SBC? #997, Raspberry Pis #907, microconsoles, etc.), not just on programmers' own big desktop PCs and artist workstations. The philosophy of 'doing more with less' rather than being a gas-guzzler means a wider audience for your code. Think of batteries and solar panels rather than a present-day first-world grid running on fossil fuels.

  • If managed sensibly (e.g. computations within nested windows in 2D, or computations relative to local centres in 3D), 32 bits works well enough for most applications. People have done games where you fly between planets and land on surfaces using 16-bit coordinates :). As I understand it, most layout is algorithmic/tree-like anyway, so a user might not have to think about this sort of thing directly.
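A minimal sketch of the "local centres" idea, with hypothetical `Window`/`Widget` types (not conrod's actual API): keep one high-precision origin per window and store each widget's offset in f32, so per-widget math never sees large magnitudes and stays well within f32's ~7 significant decimal digits.

```rust
#[derive(Clone, Copy)]
struct Window {
    origin: [f64; 2], // absolute position, updated rarely (e.g. on scroll)
}

#[derive(Clone, Copy)]
struct Widget {
    offset: [f32; 2], // position relative to the enclosing window: always small
}

// Only this one conversion touches high precision; all layout math
// between widgets can stay in f32.
fn absolute_pos(win: &Window, w: &Widget) -> [f64; 2] {
    [
        win.origin[0] + w.offset[0] as f64,
        win.origin[1] + w.offset[1] as f64,
    ]
}

fn main() {
    // A window scrolled very far from the global origin...
    let win = Window { origin: [1.0e9, 2.0e9] };
    // ...still places its children exactly, because their offsets are small.
    let w = Widget { offset: [12.5, 48.25] };
    assert_eq!(absolute_pos(&win, &w), [1.0e9 + 12.5, 2.0e9 + 48.25]);
    println!("{:?}", absolute_pos(&win, &w));
}
```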

  • If you think 'Moore's law' means small types become irrelevant, remember that AI workloads (which demand more samples) are driving better support for f16 and even 8-bit precision, and I hope we'll go back to being able to manage precision better in graphics code (e.g. texture blending doesn't need 32 bits, nor do various steps of subdivision, etc.). Software and hardware bounce off each other: as software developers, by demanding/supporting intelligent precision tradeoffs, we're more likely to get intelligently designed hardware in future.

Moore's law shouldn't really mean 'we can be wasteful, doing the same things is easier'; it means computers can get smaller and into more places (wearables), or handle things they couldn't do before (like AI).

  • Remember that hardware is tending toward making auto-vectorisation easier (e.g. vgather), which will mean half the precision is routinely double the performance: double the number of loop iterations stuffed into the same register size. This is part of why I'm always talking about 32-bit indices as well (again, seeing people defend just using 64 bits everywhere in this community triggers me, but that's another tangent): 32-bit indices and 32-bit floats slot into the same fields in these vector registers, with a 64-bit base. Think of graphics code: you won't have one u8 array filling memory; you'll have an array of textures, arrays of coords, etc. Needing more than 2 billion items is a rare case you can handle separately; it's more like hundreds of millions of items. There is of course compiler work needed to make this a reality, and we can help drive that with new languages :)
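The footprint half of that claim is easy to check in plain Rust. This only demonstrates memory size; the SIMD lane count follows from the same 4-vs-8-byte ratio (e.g. eight f32 lanes vs four f64 lanes in a 256-bit register).

```rust
fn main() {
    // Half the precision is half the bytes: the same register, cache line,
    // or page holds twice as many f32 lanes as f64 lanes.
    assert_eq!(std::mem::size_of::<f32>(), 4);
    assert_eq!(std::mem::size_of::<f64>(), 8);

    // One million 2D points: the f64 buffer is exactly twice the size.
    let pts32 = vec![[0.0f32; 2]; 1_000_000];
    let pts64 = vec![[0.0f64; 2]; 1_000_000];
    assert_eq!(
        std::mem::size_of_val(&pts32[..]) * 2,
        std::mem::size_of_val(&pts64[..])
    );
    println!("ok");
}
```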

  • (And if we eventually get pervasive f16 support even in CPUs, then in the cases where you can reduce to 16-bit coords and indices, you'd get 4x the performance.)

  • What kind of GUI would need 64-bit precision? A good GUI uses context sensitivity to present the salient information in a visually simple way, instead of filling the screen with hundreds of widgets to visually search. You might argue that you want to be able to scroll through some huge page of controls, but I would argue that's a badly designed GUI :). Even if you do need it, the library should still be able to handle it accurately in 32 bits through windowing.

At a simple level, my own code predominantly uses f32, so I might end up with awkward conversions (admittedly mixing 3D/GUI coords might not happen, but it would be nice to, say, bring up pop-up menus on clicks into the 3D scene, or have infoboxes/editors connected to 3D objects by arrows).

I do use a parameterized vector library (and of course I realise that has a compile-time cost).

Would it be possible to parameterise the coordinates to allow user choice? (I realise that introduces syntactic cost.)
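One hedged sketch of what such parameterisation could look like, using ordinary generics. This is illustrative only, not conrod's actual design; the trait bounds are the "syntactic cost" mentioned above.

```rust
use std::ops::Add;

// A coordinate type parameterised over its scalar; callers pick f32 or f64.
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Point<S> {
    pub x: S,
    pub y: S,
}

impl<S: Add<Output = S> + Copy> Point<S> {
    // Every generic method carries bounds like those above: the syntactic cost.
    pub fn translate(self, dx: S, dy: S) -> Point<S> {
        Point { x: self.x + dx, y: self.y + dy }
    }
}

fn main() {
    let a: Point<f32> = Point { x: 1.0, y: 2.0 }.translate(0.5, 0.5);
    let b: Point<f64> = Point { x: 1.0, y: 2.0 }.translate(0.5, 0.5);
    assert_eq!(a, Point { x: 1.5, y: 2.5 });
    assert_eq!(b, Point { x: 1.5, y: 2.5 });
}
```

The cost shows up at every API boundary (bounds, turbofish, monomorphised compile time), which is why a global alias is the cheaper alternative when one scalar per build is acceptable.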

Tangentially, I think module-level type parameters would make this sort of thing much easier; that's a language RFC rather than a library feature. Re: compile times, I would hope module-level type params could close the gap between a type param and simply having a global 'type MyScalar = f32 / f64'.
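In the absence of module-level type parameters, the closest thing today is a crate-wide alias that a fork can flip with a one-line change (names here are illustrative):

```rust
// A global scalar alias: zero generics overhead, but one choice per build.
pub type Scalar = f32; // a fork could flip this to f64

pub fn midpoint(a: [Scalar; 2], b: [Scalar; 2]) -> [Scalar; 2] {
    [(a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0]
}

fn main() {
    assert_eq!(midpoint([0.0, 0.0], [10.0, 4.0]), [5.0, 2.0]);
}
```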

(EDIT: ah, I see there is a typedef, Scalar = f64, so it might be possible to change it in a fork, but I just wanted to get people's thinking on this.)

@Bobo1239

@dobkeratops Just fyi: There's rust-lang/rfcs#424 but there hasn't been much traction till now.

@dobkeratops
Author

dobkeratops commented Dec 29, 2017

There's rust-lang/rfcs#424 but there hasn't been much traction till now.

Thanks for the link; that is exactly what I have in mind as a solution to this sort of thing. I do miss the ability of nested classes in C++ to share type params (which can help), but parameterised Rust modules as described in that RFC would be vastly superior.

@daboross
Contributor

I feel like there are so many unoptimized things about conrod at the moment that switching f64 -> f32 would make a meaningless difference. Sure, it could save a few bytes, but what's the point if nothing else changes to make things more efficient?

I can agree that eventually conrod will want a feature for switching to 32-bit coordinates, but it seems premature at the moment.

@dobkeratops
Author

dobkeratops commented Dec 29, 2017

" so many unoptimized things about conrod at the moment that switching f64->f32 would be a meaningless difference"

... but that doesn't mean you have to wait for the other things to be done to fix it. It's such an easy fix, and if it does produce precision problems, good: it means you have to fix them algorithmically.

So yes, I'll call it out: it's not premature optimisation. The use of overkill precision might have a negative effect on other architectural decisions (the ordering of layout calculation/traversals, and how people plug code into it); you might start relying on it to do things by suboptimal means, which, if you stick with them, will be harder to fix later.

What might be premature, though, is getting it to use 16 bits :)

@daboross
Contributor

daboross commented Dec 29, 2017

That sounds reasonable. I guess I still wouldn't prioritize it, but if someone were to change it, it wouldn't be too bad.

@pedrocr
Contributor

pedrocr commented May 30, 2021

Isn't this a duplicate of the already closed #144?

5 participants