How does one use TensorFlow's OpOutputList?


On GitHub, an OpOutputList is initialized like so:

OpOutputList outputs;
OP_REQUIRES_OK(context, context->output_list("output", &outputs));

and Tensors are added to it like this:

Tensor* tensor0 = nullptr;
Tensor* tensor1 = nullptr;
long long int sz0 = 3;
long long int sz1 = 4;
...
OP_REQUIRES_OK(context, outputs.allocate(0, TensorShape({sz0}), &tensor0));
OP_REQUIRES_OK(context, outputs.allocate(1, TensorShape({sz1}), &tensor1));

I'm assuming OpOutputList and OpInputList are the mechanism by which jagged arrays are allowed.

My question is, how does OpOutputList work? I'm getting segfaults and can't access the first index when I use Eigen::Tensor::flat(); because I don't understand how the allocation works, I can't pinpoint the error.

Many thanks.

An OpOutputList object is a simple value object containing two integers: the start and end indices of the op outputs contained in the list. Being simple value objects, you can create them on the stack; no "allocation" is required.
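For intuition, here is a rough sketch of the relevant parts of the class (paraphrased and simplified; see tensorflow/core/framework/op_kernel.h for the full definition). Besides the two indices, it also keeps a pointer back to the OpKernelContext, which is what allocate() forwards to:

class OpOutputList {
 public:
  // Allocate the i-th output in the list (forwards to allocate_output()).
  Status allocate(int i, const TensorShape& shape, Tensor** output);
  // Access an output that has already been set or allocated.
  Tensor* operator[](int i);
  int size() const { return stop_ - start_; }

 private:
  OpKernelContext* ctx_;  // not owned; needed to forward allocations
  int start_;             // index of the first output slot in the list
  int stop_;              // one past the index of the last slot
};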

You allocate the tensors that logically belong to an OpOutputList like any other tensor, i.e. using allocate_output(). Here is the implementation of OpOutputList::allocate:

Status OpOutputList::allocate(int i, const TensorShape& shape,
                              Tensor** output) {
  DCHECK_GE(i, 0);
  DCHECK_LT(i, stop_ - start_);
  return ctx_->allocate_output(start_ + i, shape, output);
}

As you can see, it checks that the index i is indeed within the OpOutputList and then calls allocate_output.
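Putting the pieces together, here is a minimal sketch of a kernel whose Compute() allocates and fills a jagged list of outputs. The op name JaggedOutput, the N attr, the float dtype, and the per-output lengths are illustrative assumptions, not taken from the question:

#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/tensor_shape.h"

using namespace tensorflow;

// Hypothetical op whose "output" is declared as a list of N float tensors.
REGISTER_OP("JaggedOutput")
    .Attr("N: int >= 1")
    .Output("output: N * float");

class JaggedOutputOp : public OpKernel {
 public:
  explicit JaggedOutputOp(OpKernelConstruction* context) : OpKernel(context) {}

  void Compute(OpKernelContext* context) override {
    // The list is just a view over a contiguous range of output slots.
    OpOutputList outputs;
    OP_REQUIRES_OK(context, context->output_list("output", &outputs));

    for (int i = 0; i < outputs.size(); ++i) {
      // Each slot can get its own shape, which is what makes the result jagged.
      Tensor* out = nullptr;
      OP_REQUIRES_OK(context,
                     outputs.allocate(i, TensorShape({i + 3}), &out));

      // Only touch the tensor after allocate() has succeeded; calling flat()
      // through a still-null Tensor* is a typical cause of the segfault
      // described in the question.
      auto flat = out->flat<float>();
      for (int j = 0; j < flat.size(); ++j) {
        flat(j) = static_cast<float>(j);
      }
    }
  }
};

REGISTER_KERNEL_BUILDER(Name("JaggedOutput").Device(DEVICE_CPU), JaggedOutputOp);

The key point is that outputs.allocate(i, ...) is just allocate_output(start_ + i, ...) in disguise, so once it returns OK the resulting Tensor* behaves exactly like any directly allocated output.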

