python - Why is the list+set method of making a list unique faster than the dictionary keys method?


Here is a sample timeit trial of the same:

>>> import timeit
>>> setup = """
... from random import randint
... rand_list = [randint(0, 10) for x in range(0, 10000)]
... """
>>> timeit.Timer('list(set(rand_list))', setup=setup).repeat(5, 1000)
[0.17256593704223633, 0.17117094993591309, 0.17115998268127441, 0.17191100120544434, 0.17226791381835938]
>>> timeit.Timer('{x: True for x in rand_list}.keys()', setup=setup).repeat(5, 1000)
[0.4490840435028076, 0.44455599784851074, 0.442918062210083, 0.4430229663848877, 0.44559407234191895]

As you can see, the list(set(my_list)) method is approximately 2.5 times faster than the dictionary method, and the result is similar for smaller or bigger lists.

Can someone please explain why, i.e. what the difference is in how these two approaches execute, in terms of time complexity?

You are running a Python loop over the 10000 items in the second test, inside the dictionary comprehension; that loop is what slows it down.
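To make that concrete, the comprehension is roughly equivalent to the explicit Python-level loop below (an illustrative sketch only; the variable names are made up), whereas set() does its looping inside C:

from random import randint
rand_list = [randint(0, 10) for x in range(0, 10000)]

# Roughly what '{x: True for x in rand_list}' does, spelled out as an
# explicit Python-level loop:
unique = {}
for x in rand_list:      # the interpreter executes bytecode for every item
    unique[x] = True     # plus a hash and an insert per item
unique_list = list(unique.keys())

# list(set(rand_list)) hashes and inserts the same items, but the loop over
# rand_list runs inside the C implementation of the set constructor.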

You could try dict.fromkeys() instead:

dict.fromkeys(rand_list).keys() 

This creates a dictionary with the rand_list values as keys, with all the values set to None.
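For instance (a quick illustrative session, not from the original post; on Python 3.7+ the keys keep insertion order, on older versions the order is arbitrary):

>>> small_list = [3, 1, 3, 2, 1]
>>> dict.fromkeys(small_list)
{3: None, 1: None, 2: None}
>>> list(dict.fromkeys(small_list).keys())
[3, 1, 2]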

This is now only slightly slower:

>>> import timeit
>>> from random import randint
>>> rand_list = [randint(0, 10) for x in range(0, 10000)]
>>> timeit.Timer('list(set(rand_list))', setup='from __main__ import rand_list').repeat(5, 1000)
[0.1437511444091797, 0.13837504386901855, 0.13841795921325684, 0.1395130157470703, 0.1474599838256836]
>>> timeit.Timer('dict.fromkeys(rand_list).keys()', setup='from __main__ import rand_list').repeat(5, 1000)
[0.18216991424560547, 0.17930316925048828, 0.18064308166503906, 0.17971301078796387, 0.17820501327514648]

That's to be expected; a dict has to do more work to track both keys and values, as opposed to just the keys in a set.
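If you want to check this yourself, a self-contained benchmark along these lines compares all three approaches in one run (a sketch; the list size and formatting are just illustrative):

import timeit
from random import randint

rand_list = [randint(0, 10) for x in range(0, 10000)]
setup = 'from __main__ import rand_list'

# Each statement produces the unique elements of rand_list; only the amount
# of Python-level looping and key/value bookkeeping differs.
statements = [
    'list(set(rand_list))',                  # C-level loop, keys only
    'dict.fromkeys(rand_list).keys()',       # C-level loop, keys + None values
    '{x: True for x in rand_list}.keys()',   # Python-level loop, keys + values
]

for stmt in statements:
    best = min(timeit.Timer(stmt, setup=setup).repeat(5, 1000))
    print('%-40s %.4f s' % (stmt, best))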

