did I get this right:
- the input to the decoder (the JFIF data, FFD8 -> FFD9) should be queued on V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE
- the output (the decoded color components, like YUV or RGB) should appear on V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE once the decoder is done
???
that doesn't seem very logical to me
most of the examples I've found, like those from STM and Samsung, even the m2m ones (where you can specify an input file and an output file), when adapted and compiled for the rpi2w, tell me that the "video dev does not support capture". I believe that's because these examples don't support the newer multi-planar API stuff... or maybe the rpi2w's hardware just isn't capable of solving my task?
now trying to adapt some mplane code from libav, still no luck
Statistics: Posted by andrey-bts — Sat Apr 27, 2024 2:59 pm