android - Battery Power Consumption between C/RenderScript/NEON Intrinsics -- Video Filter (Edge Detection) APK


I have developed three versions (C, RenderScript, and NEON intrinsics) of a video processing algorithm (edge detection) using the Android NDK (the RenderScript version uses its C++ APIs). The C/RS/NEON calls are all made at the native level on the NDK side from a Java front end. I found that, for some reason, the NEON version consumes a lot more power than the C and RS versions. I used Trepn 5.0 for the power testing.
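For context, a minimal sketch of how such a setup is typically wired up over JNI; the class, method, and function names below are hypothetical, since the question does not show the actual glue code:

    // Hypothetical JNI entry point dispatching one video frame to one of the
    // three native backends. The edgeDetect* functions are assumed to be
    // implemented elsewhere in the NDK project.
    #include <jni.h>
    #include <cstdint>

    void edgeDetectC(const uint8_t *in, uint8_t *out, int w, int h);
    void edgeDetectRS(const uint8_t *in, uint8_t *out, int w, int h);
    void edgeDetectNeon(const uint8_t *in, uint8_t *out, int w, int h);

    extern "C" JNIEXPORT void JNICALL
    Java_com_example_filter_NativeFilter_processFrame(JNIEnv *env, jobject /*thiz*/,
                                                      jbyteArray input, jbyteArray output,
                                                      jint width, jint height, jint backend)
    {
        jbyte *in  = env->GetByteArrayElements(input, nullptr);
        jbyte *out = env->GetByteArrayElements(output, nullptr);

        switch (backend) {
            case 0:  edgeDetectC(reinterpret_cast<uint8_t *>(in),
                                 reinterpret_cast<uint8_t *>(out), width, height); break;
            case 1:  edgeDetectRS(reinterpret_cast<uint8_t *>(in),
                                  reinterpret_cast<uint8_t *>(out), width, height); break;
            default: edgeDetectNeon(reinterpret_cast<uint8_t *>(in),
                                    reinterpret_cast<uint8_t *>(out), width, height); break;
        }

        env->ReleaseByteArrayElements(input, in, JNI_ABORT);   // input was not modified
        env->ReleaseByteArrayElements(output, out, 0);         // copy results back to Java
    }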

  1. Can anyone clarify the power consumption levels of each of these methods: C, RenderScript (GPU), and NEON intrinsics? Which one consumes the most?

  2. What is the ideal power consumption level for RS code? Since the GPU runs at a lower clock frequency, its power consumption should be lower!

  3. Do the RenderScript APIs focus on power optimization?

Video: 1920x1080 (20 frames)

  1. C -- 11115.067 ms (0.80 mW)
  2. RS -- 9867.170 ms (0.43 mW)
  3. NEON intrinsics -- 9160 ms (1.49 mW)
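To put those numbers side by side, multiplying each run's average power by its elapsed time gives the energy per 20-frame run, which is arguably the fairer battery metric since the runtimes differ. This is only a quick sketch and assumes the mW figures are the average power Trepn reported over each run, which the question does not state explicitly:

    // Energy per run = average power (mW) x elapsed time (s) = millijoules.
    #include <cstdio>

    int main()
    {
        struct Run { const char *name; double ms; double mw; };
        const Run runs[] = {
            {"C",               11115.067, 0.80},
            {"RenderScript",     9867.170, 0.43},
            {"NEON intrinsics",  9160.000, 1.49},
        };
        for (const Run &r : runs) {
            double millijoules = r.mw * (r.ms / 1000.0);  // mW * s = mJ
            std::printf("%-16s %.1f mJ per run\n", r.name, millijoules);
        }
        return 0;
    }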

First, the power consumption of RenderScript code is dependent on the type of SoC, the frequencies/voltages at which the CPUs and GPUs operate, etc.

Even if you look at CPUs from the same vendor, ARM for instance, A15s and A9s differ: A15 CPUs are more power hungry compared to A9s. Similarly, a Mali GPU 4xx versus a 6xx exhibits power consumption differences for the same task. In addition, power deltas exist between different vendors, for instance Intel and ARM CPUs, doing the same task. Similarly, one can notice power differences between a Qualcomm Adreno GPU and an ARM Mali GPU, even if they are operating at the same frequency/voltage levels.

If you use a Nexus 5, you have a quad A15 CPU cranking at 2.3 GHz per CPU. RenderScript pushes the CPUs and GPUs to their highest clock speed. So on that device, I would expect the power consumption of RS code based on CPU/NEON, or on the CPU alone, to be the highest depending on the type of operations you are doing, followed by RS GPU code. The bottom line on power consumption is that the type of device you are using matters a lot due to the differences in the SoCs they use. In the latest generation of SoCs out there, I expect CPUs/NEON to be more power hungry than the GPU.

RS will push the CPU/GPU clock frequency to the highest possible speed, so I am not sure if one can do any meaningful power optimizations here. Even if one does, those power savings will be minuscule compared to the power consumed by the CPUs/GPU at top speed.

This power consumption is such a huge problem on mobile devices that you will probably be fine, from a power consumption angle, with filters processing a few frames in the computational imaging space. But the moment one does RenderScript in real video processing, the device gets heated up even at lower video resolutions, and then the OS thermal managers come into play. These thermal managers reduce the overall CPU speeds, causing unreliable performance with CPU RenderScript.

Responses to comments

Frequency alone is not the cause of power consumption. It is the combination of frequency and voltage. For instance, a GPU running at 200 MHz at 1.25 V and at 550 MHz at 1.25 V could consume nearly the same power. Depending on how the power domains are designed in the system, 0.9 V should be enough for 200 MHz, and the system should in theory transition the GPU power domain to a lower voltage when the frequency comes down. But various SoCs have various issues, so one cannot guarantee a consistent voltage and frequency transition. This could be one reason why GPU power is high for nominal loads.

So whatever the complexities, if you are holding the GPU voltage at 1.25 V at 600 MHz, the power consumption will be pretty high and comparable to that of CPUs cranking at 2 GHz at 1.25 V...
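As a rough illustration of why the missing voltage transition hurts, here is a back-of-the-envelope sketch using the standard first-order CMOS dynamic power model (P ~ C * V^2 * f). The capacitance constant is arbitrary and leakage (which also grows with voltage) is ignored, so only the ratio is meaningful; these are not measurements of any real GPU:

    #include <cstdio>

    int main()
    {
        const double C = 1.0;    // arbitrary effective switched capacitance
        const double f = 200e6;  // 200 MHz in both cases

        double p_stuck   = C * 1.25 * 1.25 * f;  // frequency lowered, voltage stuck at 1.25 V
        double p_dropped = C * 0.90 * 0.90 * f;  // power domain transitioned down to 0.9 V

        // Dropping the domain voltage alone roughly halves the dynamic power
        // at the same 200 MHz clock.
        std::printf("0.9 V vs 1.25 V at 200 MHz: %.0f%% of the dynamic power\n",
                    100.0 * p_dropped / p_stuck);
        return 0;
    }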

I tested NEON intrinsics with a 5x5 convolve and it is pretty fast (3x-5x) compared to not using NEON on the CPUs for the same task. The NEON hardware is in the same power domain as the CPUs (aka the MPU power domain). So the CPUs are held at their voltage/frequency even while only the NEON hardware is working. Since NEON performs a given task faster than the CPU, I wouldn't be surprised if it consumes relatively more power than the CPU does for that task. Something has to give if you are getting faster performance - and that is power.
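For reference, here is a minimal sketch of the kind of per-row NEON intrinsics work an edge-detection filter does: a simple horizontal gradient on an 8-bit grayscale row, not the 5x5 convolve mentioned above. It is only meant to show the vector load/compute/store pattern:

    #include <arm_neon.h>
    #include <cstdint>
    #include <cstddef>

    // Horizontal gradient magnitude |p[x+1] - p[x-1]| for one grayscale row,
    // 16 pixels per NEON iteration, scalar tail for the leftovers.
    // Assumes width >= 2.
    void gradientRowNeon(const uint8_t *src, uint8_t *dst, size_t width)
    {
        size_t x = 1;
        for (; x + 16 < width; x += 16) {
            uint8x16_t left  = vld1q_u8(src + x - 1);   // p[x-1 .. x+14]
            uint8x16_t right = vld1q_u8(src + x + 1);   // p[x+1 .. x+16]
            vst1q_u8(dst + x, vabdq_u8(right, left));   // |right - left|
        }
        for (; x < width - 1; ++x) {
            int d = static_cast<int>(src[x + 1]) - static_cast<int>(src[x - 1]);
            dst[x] = static_cast<uint8_t>(d < 0 ? -d : d);
        }
        dst[0] = 0;
        dst[width - 1] = 0;
    }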

