How does the space complexity notation change with input type?

It depends mostly on what you want to discuss in your analysis.

Most algorithm analysis assumes the RAM model, where each item has constant size (a machine word, roughly the size of a pointer) and operations on single items take constant time. This is of course an abstraction, but usually a good enough one: it fits your number[] case perfectly, for example, since in JS every number takes 64 bits [1].
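
As a minimal sketch of that model (the function name and code are illustrative, not from the question): cloning a number[] touches each of the n fixed-size elements once, so it costs Θ(n) time and Θ(n) extra space.

```typescript
// Illustrative sketch: cloning a number[] under the RAM model.
// Each element is assumed to be a constant-sized (64-bit) value, so the
// copy needs Θ(n) extra space and the loop performs Θ(n) constant-time steps.
function cloneNumbers(xs: number[]): number[] {
  const copy: number[] = new Array(xs.length);
  for (let i = 0; i < xs.length; i++) {
    copy[i] = xs[i]; // constant-time, constant-size copy per element
  }
  return copy;
}
```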

Such abstractions are leaky, and this one no longer holds in languages like Python or Haskell, where integer values may have arbitrary precision. There, you can either choose to ignore this (typically with the argument that for real-world workloads you can assume some constant upper bound) or explicitly take the size of the integers into account.
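
To illustrate the second option, here is a hypothetical sketch using JavaScript's bigint as a stand-in for Python/Haskell arbitrary-precision integers: the space taken by a single value now grows with its bit length rather than being constant.

```typescript
// Illustrative sketch: with arbitrary-precision integers (bigint here,
// standing in for Python/Haskell integers), a "single value" is no longer
// constant-sized. The result of 2^k occupies roughly k bits, so this one
// variable uses space linear in k rather than O(1).
function powerOfTwo(k: number): bigint {
  return 2n ** BigInt(k); // result needs about k + 1 bits of storage
}
```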

For most algorithms that operate on strings, like your second example, it is usually a good idea to model their average or maximum length as a separate variable, as many string operations depend on it [2]. Input size is just one of many possible metrics: you can also take the distribution, frequency, or sortedness of values into account. Those would make sense for complexity analysis of hashmap operations or sorting, but not for your example of simply cloning a list.
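
For instance, here is a hypothetical sketch of cloning a string[] with the maximum string length m treated as a second parameter (function name and logging are illustrative): if the engine copies string contents, the clone needs Θ(n·m) extra space; if it shares contents by reference, only Θ(n) for the new array (see footnote [2]).

```typescript
// Illustrative sketch: cloning a string[] with string length m as an extra
// parameter. Depending on the engine (footnote [2]), each copy may duplicate
// the characters (Θ(n·m) extra space overall) or just share a reference
// (Θ(n) extra space for the new array itself).
function cloneStrings(xs: string[]): string[] {
  const copy: string[] = new Array(xs.length);
  let totalChars = 0;
  for (let i = 0; i < xs.length; i++) {
    copy[i] = xs[i];            // may copy contents or share them, engine-dependent
    totalChars += xs[i].length; // up to n·m characters in total
  }
  console.log(`cloned ${xs.length} strings, ${totalChars} characters in total`);
  return copy;
}
```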

[1]: Depending on the concrete engine implementation, some numbers may be optimised down to 32 bits, and some may be stored on the heap, in which case you have to account for an additional pointer. In either case, there is a constant upper bound.
[2]: Which operations those are depends on the string implementation, again an engine internal in JavaScript. Copying a string may copy its contents or just copy a reference. Even concatenation may need to create a copy, or may just build some sort of linked list.
