cascading - How to avoid filling up hadoop logs on nodes?


When our Cascading jobs encounter an error in the data, they throw various exceptions… these end up in the logs, and if the logs fill up, the cluster stops working. Is there a config file that can be edited/configured to avoid such scenarios?

We are using MapR 3.1.0, and we are looking for a way to limit log use (syslogs/userlogs) without using centralized logging and without adjusting the logging level. We are less bothered about whether it keeps the first n bytes or the last n bytes of the logs and discards the remaining part.

We don't care much about the logs, and only need the first (or last) few megs to figure out what went wrong. We don't want to use centralized logging, because we don't want to keep the logs and don't care to spend the perf overhead of replicating them. Also, correct me if I'm wrong: userlog.retain-size has issues when JVM re-use is used.
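For reference, a minimal mapred-site.xml sketch of the Hadoop 1.x-era userlog properties being discussed here. The property names and defaults should be verified against the MapR 3.1.0 docs, and the values below are only illustrative:

<!-- mapred-site.xml (illustrative values, verify names against your distro) -->
<property>
  <!-- cap each task attempt's userlog at roughly 4 MB -->
  <name>mapred.userlog.limit.kb</name>
  <value>4096</value>
</property>
<property>
  <!-- delete task userlogs this many hours after job completion -->
  <name>mapred.userlog.retain.hours</name>
  <value>12</value>
</property>
<property>
  <!-- bytes of a map task's userlog kept when it is truncated after the
       task finishes; this post-task truncation is the step reported to
       misbehave when JVM reuse is enabled -->
  <name>mapreduce.cluster.map.userlog.retain-size</name>
  <value>4194304</value>
</property>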

Any clue/answer is appreciated!

Thanks,

Srinivas

This should be on a different Stack Exchange site, as it's more of a DevOps question than a programming question.

Anyway, you need your DevOps team to set up logrotate and configure it according to your needs, or to edit the log4j configuration files on the cluster to change the way logging is done. Sketches of both follow.
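A minimal logrotate sketch, assuming the MapR 3.x layout where task userlogs live under /opt/mapr/hadoop/hadoop-0.20.2/logs/userlogs (adjust the paths and limits to your install; the file name is hypothetical):

# /etc/logrotate.d/hadoop-userlogs  (hypothetical file)
# Rotate each task attempt's syslog/stderr once it reaches ~10 MB,
# keep a single compressed copy, and truncate in place so the running
# task keeps its open file handle.
/opt/mapr/hadoop/hadoop-0.20.2/logs/userlogs/*/*/syslog
/opt/mapr/hadoop/hadoop-0.20.2/logs/userlogs/*/*/stderr {
    size 10M
    rotate 1
    copytruncate
    missingok
    notifempty
    compress
}

And a similarly hedged log4j snippet for the daemon-side logs. Most Hadoop distributions ship a log4j.properties rather than XML, and RFA is the conventional appender name there; the sizes below are only examples:

# ${HADOOP_CONF_DIR}/log4j.properties  (illustrative values)
# Size-capped rolling appender: roll at ~10 MB and keep one backup file.
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=10MB
log4j.appender.RFA.MaxBackupIndex=1
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n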

