K-ballo changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
nanmiao has joined #ste||ar
K-ballo has quit [Ping timeout: 240 seconds]
K-ballo has joined #ste||ar
nanmiao has quit [Quit: Connection closed]
K-ballo has quit [Quit: K-ballo]
shahrzad has quit [Quit: Leaving]
hkaiser has quit [Quit: bye]
lst_phnx has joined #ste||ar
lst_phnx has quit [Quit: Ping timeout (120 seconds)]
diehlpk_work has quit [Remote host closed the connection]
bita has quit [Ping timeout: 246 seconds]
K-ballo has joined #ste||ar
hkaiser has joined #ste||ar
<gnikunj[m]> hkaiser: yt?
<hkaiser> here
<gnikunj[m]> Had your morning coffee?
<hkaiser> working on it ;-)
<gnikunj[m]> I was porting things to rostam and realized that atomic<bool> doesn't work for host device
<gnikunj[m]> so we can't use that method with async replicate
<gnikunj[m]> I'm looking into a fix in the meantime. You worry about your coffee ;-)
<hkaiser> yah, I can see that
<hkaiser> atomic_flag should work, however: https://en.cppreference.com/w/cpp/atomic/atomic_flag
<gnikunj[m]> aah, ok. Let me replace it with that.
<gnikunj[m]> should be a test and set operation here
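A minimal host-side sketch of the test-and-set approach hkaiser suggests above (illustrative only, not HPX code):

    #include <atomic>

    std::atomic_flag done = ATOMIC_FLAG_INIT;   // starts cleared

    void run_once()
    {
        // test_and_set returns the previous state: only the first caller
        // sees 'false' and performs the one-time work.
        if (!done.test_and_set(std::memory_order_acq_rel))
        {
            // ... one-time work ...
        }
    }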
<hkaiser> gnikunj[m]: I'm not sure what the device uses
<hkaiser> atomic<bool> should work on the device too, you just can't wrap it in a view
<gnikunj[m]> also, atomic<bool> is not copyable, so we can't use it within a kokkos view
<hkaiser> yah, sure
<hkaiser> btw, there is no reason to wrap the atomic in the view, a simple bool should be fine there
<hkaiser> use an atomic on the device and bring back the result as a simple bool
<gnikunj[m]> with the atomic flag? yes. Without it, how?
<hkaiser> allocate the atomic on the device
<hkaiser> not on the host
<gnikunj[m]> it gets allocated on the device
<gnikunj[m]> for device execution space, it will be allocated on the device
<hkaiser> no, that's a view
<gnikunj[m]> sure, the view gets allocated on the host but it points to atomic<bool> in the device. Do you mean initialize it within the for loop itself?
<hkaiser> that will not work as you may have more than one for_loop running concurrently
<gnikunj[m]> exactly why I have it outside as a view
<hkaiser> an atomic_view would help, not sure if we have that
<gnikunj[m]> let me think of something. I'll get it working. The bigger problem is the performance tests, btw. They use hpx timers and distribution policies. No way that's going to work in a device kernel.
<gnikunj[m]> <hkaiser "an atomic_view would help, not s"> they don't :/
<jedi18[m]> hkaiser: How do I debug why the tests are failing locally? https://cdash.cscs.ch/testDetails.php?test=39644010&build=153323
<ms[m]> gnikunj: didn't have a close look at what you're actually trying to do, but you might want to look at this: https://github.com/kokkos/kokkos/wiki/View#651-atomic-access
<jedi18[m]> I tried passing in those command line arguments but that isn't working (the console doesn't display anything and keeps running)
<gnikunj[m]> ms: you're a life saver! This is exactly what I want \o/
<hkaiser> ms[m]: perfect, that's what we need
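A minimal sketch of the atomic access trait from the Kokkos wiki page ms links above (view and kernel names are made up; assumes Kokkos has been initialized):

    #include <Kokkos_Core.hpp>

    void mark_any(int n)
    {
        // Every element access on this view goes through Kokkos' atomic
        // operations, courtesy of the memory trait.
        Kokkos::View<int*, Kokkos::MemoryTraits<Kokkos::Atomic>> flag("flag", 1);

        Kokkos::parallel_for("mark", n, KOKKOS_LAMBDA(int i) {
            if (i % 2 == 0)    // stand-in for the real predicate
                flag(0) = 1;   // atomic store thanks to the trait
        });
    }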
<hkaiser> jedi18[m]: are you working on min/max, currently?
<hkaiser> as for debugging: just run it locally and step through the code...
<jedi18[m]> hkaiser: Yes I'm talking about minmax, the tests run successfully when I run it locally
<hkaiser> ms[m]: btw, thanks for looking into updating Vc on the docker image
<jedi18[m]> They also run successfully for some of the builds https://github.com/STEllAR-GROUP/hpx/pull/5241
<hkaiser> jedi18[m]: on the CI this test runs on 2 localities
<jedi18[m]> Passing in --hpx:localities=2 as a command line argument would work then?
<hkaiser> not necessarily, it's a bit more complicated than that
<hkaiser> you could use hpx_run.py
<jedi18[m]> Um..how do I use hpx_run.py?
<hkaiser> it's bin/hpxrun.py
<hkaiser> it prints a help line
<hkaiser> if in doubt, looking at the code should work as well, it's a simple script
<jedi18[m]> Oh ok, thanks!
<hkaiser> in short: hpxrun --localities=2 ./yourapp
<jedi18[m]> Thanks but how do I attach the debugger if I'm running it outside VS through the console?
<hkaiser> ms[m]: also: please go ahead with all of your sender/receiver PRs
<jedi18[m]> Or nvm, I'll try figuring it out
<hkaiser> jedi18[m]: you're using VS?
<jedi18[m]> Yes
<hkaiser> that simplifies things
<hkaiser> VS can run/debug several processes at the same time
<jedi18[m]> Oh ok, I tried running it with 2 localities, the test still runs successfully for me
<hkaiser> nod
<jedi18[m]> python hpxrun.py C:/Users/targe/Documents/hpx/build/Debug/bin/partitioned_vector_max_element1_test.exe --localities=2
<jedi18[m]> Why is it failing in the ci then?
<hkaiser> could be a race, might not be your fault
<hkaiser> not sure
<hkaiser> does it fail on all CI configurations or just on one?
<jedi18[m]> it fails on 4-5 of them but runs successfully on the rest
<jedi18[m]> Let me check
<hkaiser> ok, I'll have a look
<jedi18[m]> hkaiser: Oh ok thanks!
<srinivasyadav227> hkaiser: is there any way to know if the docker image is updated?
<gnikunj[m]> srinivasyadav227: there should be a last updated time on dockerhub that you can check
<gnikunj[m]> right
<hkaiser> srinivasyadav227: this needs to be merged, first: https://github.com/STEllAR-GROUP/docker_build_env/pull/35
<gnikunj[m]> so we're getting the Vc part into CI as well. Nice.
<srinivasyadav227> hkaiser: oh ok, i am ready with the changes (for the datapar compilation fixes), will push them once the vc image is ready
<hkaiser> srinivasyadav227: cool
<hkaiser> srinivasyadav227: thanks for looking into this - this is a whole complex of functionalities that need serious work
<srinivasyadav227> hkaiser: ;-), yes sure, will make more progress gradually, just started getting my feet wet with the code base
bita has joined #ste||ar
hkaiser has quit [Read error: Connection reset by peer]
hkaiser has joined #ste||ar
diehlpk_work has joined #ste||ar
heller1 has quit [Ping timeout: 265 seconds]
jedi18[m] has quit [Ping timeout: 265 seconds]
mi1998[m] has quit [Ping timeout: 246 seconds]
gdaiss[m] has quit [Ping timeout: 246 seconds]
rori has quit [Ping timeout: 246 seconds]
klaus[m] has quit [Ping timeout: 265 seconds]
srinivasyadav227 has quit [Ping timeout: 246 seconds]
sbalint[m] has quit [Ping timeout: 240 seconds]
tiagofg[m] has quit [Ping timeout: 258 seconds]
sestro[m] has quit [Ping timeout: 258 seconds]
pedro_barbosa[m] has quit [Ping timeout: 258 seconds]
gnikunj[m] has quit [Ping timeout: 240 seconds]
jpinto[m] has quit [Ping timeout: 268 seconds]
k-ballo[m] has quit [Ping timeout: 268 seconds]
ms[m] has quit [Ping timeout: 268 seconds]
gonidelis[m] has quit [Ping timeout: 268 seconds]
gdaiss[m] has joined #ste||ar
mi1998[m] has joined #ste||ar
rori has joined #ste||ar
heller1 has joined #ste||ar
sbalint[m] has joined #ste||ar
jedi18[m] has joined #ste||ar
tiagofg[m] has joined #ste||ar
pedro_barbosa[m] has joined #ste||ar
sestro[m] has joined #ste||ar
srinivasyadav227 has joined #ste||ar
klaus[m] has joined #ste||ar
jpinto[m] has joined #ste||ar
k-ballo[m] has joined #ste||ar
gnikunj[m] has joined #ste||ar
ms[m] has joined #ste||ar
gonidelis[m] has joined #ste||ar
<gnikunj[m]> hkaiser: got the corrections made in. Let me see if I can port the performance tests in the remaining time.
<hkaiser> cool
parsa has quit [Quit: Free ZNC ~ Powered by LunarBNC: https://LunarBNC.net]
parsa has joined #ste||ar
<gonidelis[m]> hkaiser: `reverse` is complete. There was much tweaking needed but now all tests pass. I will take care of the multiple `iter_sent` headers along with providing a uniform `advance` facility in my next PR. Once CI is ok, I think #5225 is ready to go ;))
<hkaiser> gonidelis[m]: great, thanks!
<diehlpk_work> Yeah, we got the Piz Daint proposal accepted
<hkaiser> wow!
<hkaiser> great news
<diehlpk_work> But only 50% of the requested time
<hkaiser> doesn't matter
<diehlpk_work> and the benchmark paper was accepted as well
<hkaiser> small steps at a time
<diehlpk_work> A good day for Octo-Tiger
<hkaiser> diehlpk_work: your hard work starts to pay off!
<diehlpk_work> Yes, I could run an advertising agency to push open source codes and stress collaborators
<gonidelis[m]> diehlpk_work: congrats!!
<gonidelis[m]> diehlpk_work: what's the proposal again?
<diehlpk_work> We got plenty of compute time to run a study of stellar mergers
<gnikunj[m]> hkaiser: ported for cuda as well. Things seem to work on my laptop. Let me see if I can get things running on rostam.
<diehlpk_work> hkaiser, see pm about the press release
weilewei has joined #ste||ar
<weilewei> hkaiser what would be the equivalent C++ code for this CUDA atomicAdd https://github.com/CompFUSE/DCA/blob/master/include/dca/linalg/util/atomic_add_cuda.cu.hpp#L21-L25 ?
<hkaiser> weilewei: std::atomic<>?
<weilewei> hkaiser but that would construct a new std::atomic variable? the cuda code only seems to protect the operation on the pointer (or address)
<hkaiser> yah, std::atomic_ref<> is not available yet
<gnikunj[m]> weilewei: seems like you're facing the same problems as I do :P
<gnikunj[m]> hkaiser: we'll need to meet once more to discuss the bizarre execution I'm observing.
<gnikunj[m]> I think I know where the problem is. So if you're free on Thursday sometime, let's discuss it?
<weilewei> gnikunj[m] lol, do you have solution now?
<gnikunj[m]> kokkos has a solution. That's what I used for the time being.
<hkaiser> gnikunj[m]: ok
<gnikunj[m]> They've got Kokkos versions of the C++ free functions (compare and exchange, test and set, etc.)
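For reference, a minimal sketch of the Kokkos atomic free functions gnikunj mentions (the helper name is made up):

    #include <Kokkos_Core.hpp>

    // Returns true only for the caller that flips *flag from 0 to 1.
    KOKKOS_INLINE_FUNCTION bool try_claim(int* flag)
    {
        // Kokkos::atomic_compare_exchange returns the previous value.
        return Kokkos::atomic_compare_exchange(flag, 0, 1) == 0;
    }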
<gnikunj[m]> hkaiser: let me email Katie to get a time fixed
<weilewei> it seems std::atomic_ref<> is in c++20: https://en.cppreference.com/w/cpp/atomic/atomic_ref
<hkaiser> yes
<weilewei> So no compiler supports it yet?
<weilewei> seems gcc10 supports std::atomic_ref<> https://en.cppreference.com/w/cpp/compiler_support
jehelset has quit [Remote host closed the connection]
<weilewei> so can we use this one to do similar things that cuda atomicAdd does?
weilewei has quit [Quit: Ping timeout (120 seconds)]
<K-ballo> we can implement our own atomic_ref
weilewei has joined #ste||ar
<weilewei> int val = 0; std::atomic<int*> counter { &value }; ++(*counter);
<weilewei> maybe I can do such thing?
<K-ballo> that's not what atomic_ref does, your ++ isn't atomic
<weilewei> Why?
<weilewei> So in order to convert the cuda atomicAdd here: https://github.com/CompFUSE/DCA/blob/master/include/dca/linalg/util/atomic_add_cuda.cu.hpp#L21-L25, we need to implement our own atomic_ref?
<K-ballo> because you are applying ++ to a plain old regular int, not an atomic int
<weilewei> K-ballo I see, any suggestions how to improve that?
<K-ballo> there's no improving atomic<int*>, it will never do what atomic_ref does
<weilewei> int val = 0; std::atomic_ref<int*> counter { &val }; ++(*counter);
<K-ballo> you need an actual atomic_ref
<weilewei> oh typo, how about this one? ^^
<K-ballo> that one is still wrong, still operating on a plain int
<K-ballo> int val = 0; std::atomic_ref<int> counter(val); ++counter;
<K-ballo> you need to operate on the atomic_ref itself to have atomic behavior
<weilewei> nice! I see now val becomes 1
<weilewei> but if I am passing a pointer to val, what can I do?
<K-ballo> atomic_ref<int> won't take a pointer to val
nanmiao has joined #ste||ar
<weilewei> K-ballo how about this code, does that make sense?
<K-ballo> no
<K-ballo> you want an atomic int, not an atomic int pointer
<K-ballo> dereferencing an atomic int pointer yields a plain old non-atomic int
<weilewei> I see... is there any way to fix it if I can't change the do_count api, which requires passing a pointer but needs an atomic add on the input argument?
<K-ballo> sure, std::atomic_ref<int> counter { *value };
<weilewei> great, I understand it now
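Putting K-ballo's answer together, a minimal C++20 sketch (assumes std::atomic_ref is available, e.g. GCC 10; the function name mirrors the DCA++ header but is only illustrative):

    #include <atomic>

    // Takes a pointer, like the CUDA atomicAdd linked above, but performs
    // the update through std::atomic_ref on the pointed-to object.
    inline void atomic_add(double* address, double value)
    {
        std::atomic_ref<double> ref(*address);
        ref.fetch_add(value, std::memory_order_relaxed);
    }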
weilewei has quit [Quit: Ping timeout (120 seconds)]
weilewei has joined #ste||ar
weilewei has quit [Client Quit]
weilewei has joined #ste||ar
nanmiao has quit [Quit: Connection closed]
<tiagofg[m]> hello everyone, a few months ago I asked here about running an hpx program that loads shared libraries made by me, and Kaiser said that it works with --hpx:ini=hpx.component_paths=
<tiagofg[m]> it worked with linux but it didn't with macos
<tiagofg[m]> does this functionality already work with macos?
<tiagofg[m]> my thesis work is built upon dynamic libraries and I would like it to work on macOS also
<tiagofg[m]> do you think that is possible, use --hpx:ini=hpx.component_paths= on macOS? thanks
weilewei has quit [Quit: Ping timeout (120 seconds)]
<bita> hkaiser, does decllow() in blaze make a lower triangular matrix? if not, how can tril make use of it?
<hkaiser> it marks the argument as triangular, so if you assign the result it will retrieve the triangle data only
<bita> Actually I receive an exception when I declare something that is not triangular as triangular. Have you tested it?
<hkaiser> the docs say it should work
<bita> Okay, thanks, we will test it further
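A minimal Blaze sketch of what decllow() does, as discussed above (illustrative values):

    #include <blaze/Math.h>

    int main()
    {
        blaze::DynamicMatrix<double> A{{1.0, 0.0, 0.0},
                                       {2.0, 3.0, 0.0},
                                       {4.0, 5.0, 6.0}};   // genuinely lower triangular

        blaze::DynamicMatrix<double> B;
        B = blaze::decllow(A);   // only *declares* A to be lower triangular;
                                 // if A were not, this would be undefined behavior
        return 0;
    }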
weilewei has joined #ste||ar
weilewei has quit [Quit: Ping timeout (120 seconds)]
<hkaiser> hmmm, further down it says it's undefined behavior :/
<hkaiser> bita: LowerMatrix<M>/UpperMatrix<M> are adaptors that can be used
<hkaiser> LowerMatrix<DynamicMatrix<double>> d(dest); d = rhs;
<hkaiser> dest is a preallocated DynamicMatrix and rhs is the source matrix; after the assignment, dest will have the lower half filled
weilewei has joined #ste||ar
hkaiser has quit [Read error: Connection reset by peer]
hkaiser has joined #ste||ar
weilewei has quit [Quit: Ping timeout (120 seconds)]
<bita> hkaiser, got it, thanks
<gonidelis[m]> hkaiser: what's the difference in replacing `constexpr` with `HPX_HOST_DEVICE` in tag_fallback code?
<gonidelis[m]> ah you don't
<gonidelis[m]> hkaiser: why did you add the HPX_HOST_DEVICE macro then ?
<gonidelis[m]> i guess there is something going on with HOST_DEVICE and constexpr
<gonidelis[m]> we tend to couple them?
<hkaiser> gonidelis[m]: it didn't like constexpr variables in device code
<hkaiser> so I had to separate this
nanmiao has joined #ste||ar
<K-ballo> both device and constant evaluation have restrictions on the kind of code they can run, there's no other relation
<gonidelis[m]> hkaiser: K-ballo but all the HOST_DEVICE functions are also constexpr
<gonidelis[m]> in this PR at least
<gonidelis[m]> it's like they go together
<hkaiser> functions yes, but not variables
<K-ballo> the simplest of functions will likely be able to run on both scenarios
<K-ballo> in fact most constexpr code should be able to run on device, but plenty of device code won't run in constexpr
<K-ballo> any relation is incidental
<gonidelis[m]> hkaiser: so you are optimizing these functions by using both HOST_DEVICE and constexpr
<K-ballo> neither of those qualifiers optimize them
<hkaiser> for functions HPX_HOST_DEVICE marks them to be available on host and device
<hkaiser> for variables, one can mark them as __device__, but they are not allowed to be constexpr
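A rough sketch of the distinction hkaiser describes, assuming HPX_HOST_DEVICE comes from HPX's config headers and expands to __host__ __device__ under nvcc:

    #include <hpx/config.hpp>

    // Functions can be marked for host and device and still be constexpr.
    HPX_HOST_DEVICE constexpr int twice(int x)
    {
        return 2 * x;
    }

    // Variables are different: per the discussion above, a device-side
    // variable is marked __device__ and cannot additionally be constexpr.
    #if defined(__CUDACC__)
    __device__ int device_limit = 42;
    #else
    constexpr int device_limit = 42;
    #endif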
<gonidelis[m]> didn't know that. K-ballo constexpr does optimize the code though, doesn't it? in the way that it does the work at compile time rather than runtime
<K-ballo> no
<K-ballo> it marks the code as available in constant expressions, assuming all the constrains are met
<gonidelis[m]> and if all constraints are met, then what I mentioned above happens
<gonidelis[m]> K-ballo:
<K-ballo> what have you mentioned above?
<gonidelis[m]> <gonidelis[m] "didn't know that. K-ballo conste"> .
<K-ballo> expand?
<K-ballo> a function being available in constant expressions doesn't mean that whenever you call it is evaluated at compile time
<gonidelis[m]> i said: if all constraints are met
<K-ballo> still
<K-ballo> a function will be called at compile time if it's called from a context in which a constant expression is required
<gonidelis[m]> but if it's called at compile time then the runtime is reduced
<K-ballo> any other constant folding optimization can take place with or without `constexpr` (and optimizers don't actually look at constexpr for doing that)
<K-ballo> compile time and run time are disjoint
<K-ballo> if it's called at compile time then it must be called at compile time, then it could never have had any effect in run time
<gonidelis[m]> what
<K-ballo> constexpr int fun() { return 4; }
<K-ballo> std::cout << fun(); // this is a run time call
<gonidelis[m]> because of std::cout
<K-ballo> because the context in which the call happens does not require a constant expression
<gonidelis[m]> <K-ballo "because the context in which the"> because of the std:cout
<gonidelis[m]> because outputting is a runtime thing
<K-ballo> int arr[fun()]; // this is a compilation time call, which could have never been run time
<K-ballo> enum X { enumerator = fun() }; // this is another compilation time call, never could have been run time
<K-ballo> fun(); // now this is a run time call again
<gonidelis[m]> what are you anyway?
<gonidelis[m]> do you have an example that the constexpr does actually matter?
<K-ballo> there's two up here
<K-ballo> `int arr[fun()];` and `enum X { enumerator = fun() };` would be compilation errors without it
<gonidelis[m]> 0.0
<K-ballo> that's what constexpr means: can be used in a constant expression
<gonidelis[m]> get out of here
<gonidelis[m]> i am going to wandbox
<K-ballo> wandbox can't show you anything that'd help
<gonidelis[m]> i want the compilation error
<K-ballo> beware of gnu's vararray extension
<K-ballo> run with -pedantic
<K-ballo> with the extension `int arr[fun()];` will compile, but the call still happens at runtime and this is a run time sized array
<gonidelis[m]> what is your compiler explorer example for?
<K-ballo> you can see foo() being emitted, and main() actually calling it at run time
<K-ballo> if you change constexpr for C++20's consteval you'll see the difference
<gonidelis[m]> does compiler explorer allow consteval?
<K-ballo> sure, if the compiler you pick does
<K-ballo> note you'll have to specify the corresponding C++20 flag
<gonidelis[m]> which is...?
<K-ballo> uhm, -std=c++20, or -std=c++2a, or /std:c++latest? I think?
<K-ballo> depending on which compiler you pick
<gonidelis[m]> GET OUT OF HERE
<K-ballo> here you go https://godbolt.org/z/K5s9zvPcT
<gonidelis[m]> WHAT'S GOING ON
<gonidelis[m]> yeah just did that
<K-ballo> consteval can't be called at runtime at all
<K-ballo> it must always be called at compile time (for unrelated reflection reasons)
<gonidelis[m]> so consteval is something like "forced constexpr"
<K-ballo> no, you have the wrong idea of "constexpr"
<K-ballo> constexpr means: can be used in a constant expression
<K-ballo> consteval means: can *only* be used in a constant expression
<gonidelis[m]> i thought consteval means: must be used in a const expr
<K-ballo> which is different how?
<K-ballo> I'm ignoring a lot of subtleties and technicalities, which would only make it more confusing
<K-ballo> plain function() -> run time call only
<K-ballo> constexpr function() -> if context requires constant expression compile-time, else run time
<K-ballo> consteval function() -> compile time call only
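Condensing K-ballo's three cases into one compilable C++20 snippet (function names are made up):

    constexpr int cx() { return 4; }   // may run at compile time or at run time
    consteval int ce() { return 4; }   // compile time only (C++20)

    int demo()
    {
        int arr[cx()];           // array bound requires a constant expression,
                                 // so cx() is evaluated at compile time here
        constexpr int n = ce();  // every call to ce() must be a constant expression
        int r = cx();            // ordinary run-time call of the same function
        return r + n + static_cast<int>(sizeof(arr) / sizeof(int));
    }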
<gonidelis[m]> so it's "if context requires constant expression ..." and not "if context allows constant expression "
<gonidelis[m]> i am confused with the "allow"s and the "if only"s
<K-ballo> context always allows constant expression
<K-ballo> oh, you mean, if the given arguments are constant expressions and such?
<gonidelis[m]> arguments?
<K-ballo> function arguments
<gonidelis[m]> what i am trying to figure out is if constexpr optimizes if enabled and if not then consteval sure does
<K-ballo> the compile time optimization can be applied regardless of whether the function is constexpr or not
<K-ballo> it's a regular as-if optimization
<K-ballo> there are side effects in attempting to call the function at compile time that make it impossible to "try" and see if a compile-time call is possible
<K-ballo> so aside from gcc's broken constant propagation implementation, which mixes those levels together (and gets some things wrong as a result), there's no difference
<K-ballo> just turn on optimizations and the call will go away regardless
<gonidelis[m]> if i turn optimizations on then i might as well not need constexprs? (i lost you somewhere between the lines above)
<K-ballo> the optimization that you have in mind can be performed (and has for decades) for regular functions, as long as their definition is visible to the compiler
<gonidelis[m]> it seems to me like you are rendering `constexpr` useless
<K-ballo> constexpr is not about optimizations, is available to call those functions in those contexts in which you couldn't before (at compile time)
<K-ballo> you have the wrong model of constexpr
<K-ballo> you must have thought it was something different than what it actually is
<K-ballo> the examples I gave of array bounds and enumerators are among the main reasons for having `constexpr`
<gonidelis[m]> i know that it is "evaluate a function at compile time if possible"
<K-ballo> others being NTTPs and switch cases
<K-ballo> it is not
<gonidelis[m]> i am happy to hear that
<K-ballo> it was always possible to evaluate a function at compile time when possible
<gonidelis[m]> lol
<gonidelis[m]> recursive argument
<K-ballo> well yeah :)
<K-ballo> constexpr is about those contexts that the optimization doesn't reach
<gonidelis[m]> I THOUGHT YOU WERE ADVOCATING THIS WHOLE TIME THAT CONSTEXPR HAS NOTHING TO DO WITH OPTIMIZATIONS
<K-ballo> it isn't
<K-ballo> those contexts aren't even optimizable
<gonidelis[m]> ah ah
<gonidelis[m]> ok!!
<gonidelis[m]> right
<gonidelis[m]> so it's not an optimization, it's just a facilitation
<K-ballo> you can't optimize `case fun():` into working, that's a semantic change
<gonidelis[m]> yy i get it now
<gonidelis[m]> so constexpr is for 1. using functions as rvalues (cases you mentioned) 2. NTTPs and 3. switches
<K-ballo> rvalues?
<gonidelis[m]> yeah in both cases you used fun() in the right side of the = operator
<gonidelis[m]> `int arr[fun()];` fun() is rvalue
<K-ballo> where's the =?
<gonidelis[m]> yeah sorry
<K-ballo> it's about constants, not rvalues
<gonidelis[m]> `enum X { enumerator = fun() }; ` that's the one i was talking about
<K-ballo> contexts that expect a constant
<gonidelis[m]> for a function call to be used as an rvalue it has to be constant
<K-ballo> no, not at all
<K-ballo> function calls have been used as rvalues since the beginning of time
<gonidelis[m]> but you said that this `int arr[fun()];` was not allowed before `constexpr`s
<gonidelis[m]> or at least that's what i got
<K-ballo> that's right
<K-ballo> but if you have `i = fun();` that's a function call being used as an rvalue
<K-ballo> constexpr has nothing to do with rvalues
<gonidelis[m]> why wasn't `enumerator = fun()` allowed then?
<gonidelis[m]> <K-ballo "but if you have `i = fun();` tha"> yeah my bad
<K-ballo> the = after enumerator is not an assignment
<K-ballo> it's just syntax for denoting an enumerator's value
<K-ballo> enumerators aren't objects, they can't be assigned to
<K-ballo> constexpr generalizes constants and constant expressions
<K-ballo> those four examples work just the same for other contexts which require constants: array bounds, nttps, switch cases, ...
<gonidelis[m]> what's the constant expression here?
<gonidelis[m]> i only see `const`s and `constexpr`s
<K-ballo> 1 is a constant expression
<K-ballo> constant is a constant expression
<K-ballo> 1 + constant is a constant expression
nanmiao has quit [Quit: Connection closed]
<K-ballo> those were all constant expressions since the 98 days
<K-ballo> and since constexpr, now function calls can be too, that's the constexpr_fun() call
<gonidelis[m]> so it generalizes just constant expressions, not constants
<K-ballo> yeah for functions, constexpr + object gives you the generalized constants
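As code, the constant expressions K-ballo just listed, using array bounds as the constant-requiring context (illustrative):

    constexpr int constexpr_fun() { return 2; }

    const int constant = 1;
    int a[1];                   // 1: a constant expression since C++98
    int b[constant];            // constant: likewise
    int c[1 + constant];        // 1 + constant: likewise
    int d[constexpr_fun()];     // a function call: valid here only because of constexpr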
<gonidelis[m]> how did you learn the standard so well?
<jaafar> Is there some kind of dashboard for unit tests?
<jaafar> I got some failures - but am not sure which ones I introduced locally :)
<jaafar> I'm going to rerun against master but that will take a few hours. Not sure if there are any expected failures
<K-ballo> I guess forums and mailing lists initially? then following the standardization process, then joining the standardization process
<hkaiser> jaafar: the PR have links
<gonidelis[m]> K-ballo legend says you can name every chapter and subsection in the book
<jaafar> hkaiser: thanks. What is the PR ;)
<jaafar> oh I think I see something!
<gonidelis[m]> Check build-and-test
<gonidelis[m]> tests.unit
<gonidelis[m]> jaafar if that helps at all
<jaafar> I'm running the tests, just not sure which ones are known to be failing
<jaafar> but now I do