I'm currently in touch with a customer who is seeing poor performance on a LARGE map matching request (more than 14,000 coordinates): it takes 100-180 seconds to process.
Here are my questions:
- Is it realistic to process a request of this size in a shorter period (half the time or even less)? What would we have to do to achieve this?
- Does it make sense to slice the full set of polygon points into several overlapping requests that can be processed in parallel? For example, replace [0..14000] with [0..5005], [4995..10005], [9995..14000], and then concatenate the results as [0..5000], [5001..10000], [10001..14000]. Could we recommend this approach for even larger requests? (A rough sketch of the idea follows below.)
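To make the slicing idea concrete, here is a minimal Python sketch. It is only an illustration under two assumptions: `match_route(points)` is a hypothetical stand-in for the actual map matching call, and the service is assumed to return exactly one matched point per input coordinate; if the real response shape differs, the trimming step would need to be adapted.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK = 5000    # nominal points per request
OVERLAP = 5     # extra points of context shared with each neighbouring chunk

def match_route(points):
    # Hypothetical stand-in for the actual map matching call;
    # here it simply echoes its input so the sketch is runnable.
    return list(points)

def overlapping_slices(n, chunk=CHUNK, overlap=OVERLAP):
    # Yield (start, end) index pairs with `overlap` points of context on
    # each side; for n=14000 this gives (0, 5005), (4995, 10005), (9995, 14000).
    start = 0
    while start < n:
        yield max(start - overlap, 0), min(start + chunk + overlap, n)
        start += chunk

def parallel_match(points, chunk=CHUNK, overlap=OVERLAP):
    n = len(points)
    slices = list(overlapping_slices(n, chunk, overlap))
    # Fire the per-chunk requests concurrently.
    with ThreadPoolExecutor(max_workers=len(slices)) as pool:
        results = list(pool.map(lambda s: match_route(points[s[0]:s[1]]), slices))
    # Trim the overlap context before concatenating, so every input
    # point contributes exactly once to the stitched result.
    stitched = []
    for (start, end), matched in zip(slices, results):
        lo = overlap if start > 0 else 0
        hi = len(matched) - (overlap if end < n else 0)
        stitched.extend(matched[lo:hi])
    return stitched

if __name__ == "__main__":
    coords = [(i, i) for i in range(14000)]  # dummy coordinates
    assert parallel_match(coords) == coords  # each point appears exactly once
```

One caveat worth noting: the overlap exists because each request only sees local context, so matches near chunk boundaries can differ from what a single full request would produce, and the stitched route may still show small discontinuities at the seams.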
Bernd