I am noticing large performance costs when doing operations with `Time` (as compared to `DateTime`). One example would be

```
using Dates
n = now();
arr_dt = [n for _ in 1:10^7];
arr_ts = Time.(arr_dt);
all(arr_dt .== arr_dt); all(arr_ts .== arr_ts);
@time all(arr_dt .== arr_dt)
@time all(arr_ts .== arr_ts)
```

On Julia-1.1.1 yields the following

```
0.011878 seconds (10 allocations: 1.197 MiB)
1.261891 seconds (10 allocations: 1.197 MiB)
```

From the `Dates.Time` sources I understand that equality is implemented by comparing the hour, minute, etc. components. What is the rationale for this approach when `Time` itself wraps a single `Int` of nanoseconds? Wouldn't modulo arithmetic suffice in this design? What would you advise to use for faster time comparisons? Thanks!

It appears you are benchmarking in the global scope. If so, you are very unlikely to see meaningful results.

My bad, thank you for pointing this out. However, with code wrapped in functions (attached below) results are pretty much the same, so my initial questions stand.

```
using Dates
n = now();
arr_dt = [n for _ in 1:10^7];
arr_ts = Time.(arr_dt);
f_dt() = all(arr_dt .== arr_dt)
f_ts() = all(arr_ts .== arr_ts)
f_dt(); f_ts();
@time f_dt();
@time f_ts();
```

```
0.011563 seconds (10 allocations: 1.197 MiB)
1.271418 seconds (10 allocations: 1.197 MiB, 3.69% gc time)
```

It might very well be that this case could be optimized. If you care about the performance of this, you could try to implement a faster `==` for `Time`, add some extra tests, and make a PR to Julia with it. Then everyone will benefit from it, instead of just having a local workaround.
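A minimal sketch of what such a method could look like, assuming direct access to the internal `instant` field (an implementation detail, not public API; the name `time_eq` is hypothetical, used here to avoid piracy on `Base.==`):

```julia
using Dates

# Hypothetical faster equality for Time: compare the wrapped
# nanosecond counters directly instead of extracting hour,
# minute, etc. components. NOTE: `instant` is an internal field.
time_eq(a::Time, b::Time) = a.instant == b.instant

t1 = Time(12, 30, 15)
t2 = Time(12, 30, 15)
t3 = Time(12, 30, 16)
@assert time_eq(t1, t2)
@assert !time_eq(t1, t3)
```

An actual PR would instead define `Base.:(==)(a::Time, b::Time)` so that broadcasting like `arr_ts .== arr_ts` picks it up automatically.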