hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/ | GSoD: https://developers.google.com/season-of-docs/
<Yorlik> Oh sweet - just discovered I can have abstract virtual constexpr functions ....
<Yorlik> Weird - cppreference says a constexpr function "must not be virtual (until C++20)"
<Yorlik> But it compiles and works in my setup (MSVC, c++17)
* Yorlik is scared
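(A minimal sketch of the construct Yorlik describes; the names are illustrative. Virtual constexpr functions, including pure virtual ones, are only standard from C++20 on, so MSVC accepting this under /std:c++17 would be an extension or lax enforcement.)
    struct Shape {
        virtual constexpr int sides() const = 0;   // abstract *and* constexpr
        constexpr virtual ~Shape() = default;      // constexpr dtor is also a C++20 feature
    };

    struct Square : Shape {
        constexpr int sides() const override { return 4; }
    };

    constexpr Square sq{};
    constexpr Shape const& s = sq;
    static_assert(s.sides() == 4);  // virtual dispatch during constant evaluation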
hkaiser has quit [Quit: bye]
K-ballo has quit [Quit: K-ballo]
eschnett has quit [Quit: eschnett]
eschnett has joined #ste||ar
nikunj has joined #ste||ar
nikunj has quit [Remote host closed the connection]
nikunj has joined #ste||ar
nikunj has quit [Remote host closed the connection]
<Yorlik> Could anyone explain to me why there is a limit of 2GB on array size in a 64 bit program (windows)? I got this error: https://docs.microsoft.com/en-us/cpp/error-messages/compiler-errors-1/compiler-error-c2148?view=vs-2019
quaz0r has quit [Ping timeout: 246 seconds]
quaz0r has joined #ste||ar
<heller> The data type representing the size says nothing about limitations of the platform
<heller> Why don't you just dynamically allocate this memory?
david_pfander has joined #ste||ar
<zao> Yorlik: This only talks about the size of C-style arrays in types, and automatic and static storage duration.
<zao> There are fewer restrictions on dynamic storage allocation, as those allocations are not C-style arrays.
david_pfander has quit [Ping timeout: 258 seconds]
K-ballo has joined #ste||ar
<Yorlik> heller, zao: The object containing this array is created with dynamic storage: disruptor::Disruptor<DATA, RB_SIZE>& dis = *( new disruptor::Disruptor<DATA, RB_SIZE> {} );
<zao> Yorlik: "the size of C-style arrays in types"
<zao> `struct S { char arr[9001ull << 32]; };` still has a huge array, regardless of method of allocation.
<Yorlik> The array inside this thing is DATA buf_[RB_SIZE]
<zao> Exactly.
<Yorlik> So - should I just use malloc and create a pointer?
<zao> The dynamic allocation we talk about would be `new T[huge]`, as then you never have an actual array object.
<Yorlik> or new
<Yorlik> So - just use raw memory?
<zao> So you could hold an `unique_ptr<T[]>`, or some other smart pointer type.
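(Roughly what zao's suggestion looks like for the buffer in question; DATA, RB_SIZE and buf_ are the names from the discussion, while the class name and interface are an illustrative sketch, not the actual Disruptor code.)
    #include <cstddef>
    #include <memory>

    template <typename DATA, std::size_t RB_SIZE>
    class Ringbuffer {
        // heap-allocated storage: there is no C-style array member, so the
        // 2 GB object-size limit behind C2148 no longer applies
        std::unique_ptr<DATA[]> buf_ = std::make_unique<DATA[]>(RB_SIZE);

    public:
        DATA&       operator[](std::size_t i)       { return buf_[i]; }
        DATA const& operator[](std::size_t i) const { return buf_[i]; }
        static constexpr std::size_t size() { return RB_SIZE; }
    };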
<Yorlik> There might be cases where smaller arrays are used and you might create the thing just on the stack for speed reasons.
<zao> Or if you want to reduce the number of allocations, overallocate memory and placement new your header object up-front, and treat the rest of the allocation as your payload, making sure to properly free it on destruction.
<Yorlik> Maybe I should just offer both ways to allocate it
<zao> (the latter method would largely rule out stack usage, sadly)
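(A sketch of the single-allocation layout zao describes: one over-sized allocation, the header placement-new'd at the front and the payload constructed in the rest. Names are illustrative; alignment is handled crudely, and exception safety and std::launder are omitted.)
    #include <cstddef>
    #include <new>

    template <typename T>
    struct alignas(alignof(T) > alignof(std::size_t)
                       ? alignof(T) : alignof(std::size_t)) Header {
        std::size_t capacity;
        T* data() { return reinterpret_cast<T*>(this + 1); }  // payload follows the header
    };

    template <typename T>
    Header<T>* make_block(std::size_t n) {
        void* raw = ::operator new(sizeof(Header<T>) + n * sizeof(T),
                                   std::align_val_t{alignof(Header<T>)});
        auto* h = new (raw) Header<T>{n};          // placement-new the header up front
        for (std::size_t i = 0; i < n; ++i)
            new (h->data() + i) T{};               // construct the payload in place
        return h;
    }

    template <typename T>
    void free_block(Header<T>* h) {
        for (std::size_t i = h->capacity; i > 0; --i)
            h->data()[i - 1].~T();                 // destroy payload (Header itself is trivial)
        ::operator delete(h, std::align_val_t{alignof(Header<T>)});
    }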
<Yorlik> There are a bunch of variables which influence implementation details; I think I should just use a bunch of template parameters with some sane defaults.
<zao> I'm not fully versed in template shenanigans, but you could conceivably have the array either in-line in your object or out-of-line for larger arrays.
<zao> Kind of like how you have small-string-optimization in std::string.
<Yorlik> Yes, that's what I was thinking of
<Yorlik> Just use an enum class to list the possibilities and people choose
<Yorlik> I want to keep the interface as easy as possible, because we want to open source this under MIT once it reaches usable state
<Yorlik> So it should be a nice thing to use
<Yorlik> Basically it behaves like a lockfree queue, though it isn't exactly one. You want it to fit in the cache if possible for speed. But many applications would not allow that.
hkaiser has joined #ste||ar
<Yorlik> zao: fixed with: using RBUF_T = cell_t[L]; /// Ringbuffer( ) : buf_ { (RBUF_T&)*( new RBUF_T ) } ... /// ~Ringbuffer( ) {delete[] &buf_;} ///
<Yorlik> I'll just make the allocation an option (templated constructor and an if in the ~)
<heller> Yorlik: more than 8 MB will overflow your stack anyway
<hkaiser> heller: hey
<Yorlik> I'll make allocation strategy a template parameter
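(A sketch of the allocation strategy as a template parameter, in the spirit of the SSO analogy above; the names and the 4 KB threshold are illustrative, not the actual Disruptor interface.)
    #include <array>
    #include <cstddef>
    #include <memory>
    #include <type_traits>

    enum class Storage { Inline, Heap };

    template <typename DATA, std::size_t RB_SIZE,
              Storage S = (RB_SIZE * sizeof(DATA) <= 4096 ? Storage::Inline
                                                          : Storage::Heap)>
    class Ringbuffer {
        // small buffers live inside the object (stack- and cache-friendly),
        // large ones go to the heap
        using inline_t = std::array<DATA, RB_SIZE>;
        using heap_t   = std::unique_ptr<DATA[]>;
        std::conditional_t<S == Storage::Inline, inline_t, heap_t> buf_;

    public:
        Ringbuffer() {
            if constexpr (S == Storage::Heap)
                buf_ = std::make_unique<DATA[]>(RB_SIZE);
        }
        DATA& operator[](std::size_t i) { return buf_[i]; }
        static constexpr std::size_t size() { return RB_SIZE; }
    };

    // usage: Ringbuffer<int, 256> small;        // inline storage
    //        Ringbuffer<int, (1u << 24)> big;   // heap storage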
<hkaiser> would you have time to follow up on your PR?
<heller> Yes, later today
<hkaiser> thanks
<heller> hkaiser: K-ballo: about the UB of the std::get overload... I'm aware of it, but is there any alternative to it?
<hkaiser> heller: no alternative, pls go ahead with this
<hkaiser> for all intents and purposes (and existing systems) this is safe
<K-ballo> the alternative is not doing it, there's no reason to call std::get on our tuples
<K-ballo> if the goal is to call std::get on our tuples, then let it be ub
<heller> Well, there were quite a few people who intentionally used std::get instead of hpx::util::get during the tutorial
<heller> TBH, I was under the impression that specializing tuple_element and tuple_size was enough to make std::get work. I think that's a deficiency in the specification
<K-ballo> no, there never was an actual tuple-like protocol, nor can `std::get` map from index to member with just `tuple_element` and `tuple_size`
<K-ballo> structured binding comes close, but it looks for member `get` or adl-only `get`, not `std::get`
<heller> I see
<heller> that's something that should be fixed ;)
<hkaiser> heller: go ahead
<K-ballo> how could it ever be fixed?
eschnett has quit [Quit: eschnett]
david_pfander has joined #ste||ar
<K-ballo> heller: check whether `auto [x, y] = util::tuple<int, std::string>();` works
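(A sketch of the protocol K-ballo describes, using a stand-in type rather than hpx::util::tuple: structured bindings only need std::tuple_size, std::tuple_element and a member or ADL get<I>; std::get is never consulted.)
    #include <cstddef>
    #include <string>
    #include <tuple>
    #include <type_traits>
    #include <utility>

    namespace util {
        template <typename T, typename U>
        struct my_tuple {
            T first;
            U second;
        };

        // ADL-only get<I>, found via the argument's namespace
        template <std::size_t I, typename T, typename U>
        decltype(auto) get(my_tuple<T, U>& t) {
            if constexpr (I == 0) return (t.first);
            else                  return (t.second);
        }
        template <std::size_t I, typename T, typename U>
        decltype(auto) get(my_tuple<T, U>&& t) {      // const overloads omitted
            if constexpr (I == 0) return std::move(t.first);
            else                  return std::move(t.second);
        }
    }

    namespace std {
        template <typename T, typename U>
        struct tuple_size<util::my_tuple<T, U>> : integral_constant<size_t, 2> {};

        template <typename T, typename U>
        struct tuple_element<0, util::my_tuple<T, U>> { using type = T; };
        template <typename T, typename U>
        struct tuple_element<1, util::my_tuple<T, U>> { using type = U; };
    }

    int main() {
        auto [x, y] = util::my_tuple<int, std::string>{42, "hi"};
        (void)x; (void)y;   // x is int, y is std::string; std::get played no part
    }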
<K-ballo> I suspect we might want to define the std overloads in an internal namespace, then bring them into std:: via using
david_pfander has quit [Ping timeout: 252 seconds]
brett-soric has joined #ste||ar
brett-soric has left #ste||ar [#ste||ar]