Optimize converter performance #250

Closed
kaqqao opened this issue May 1, 2019 · 0 comments

kaqqao commented May 1, 2019

Continuing the work from #194.

Many converters currently perform expensive reflective operations at execution time, usually while deriving the types needed for delegation. E.g. when a list is returned, a converter kicks in to apply the conversion to each element. To do this, it first needs to derive the element type from the generic type of the list. This is a conceptually simple but expensive operation, performed every time the corresponding field is resolved.
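The reflective derivation described above looks roughly like the following in plain JDK reflection. This is an illustrative sketch, not library code — `ElementTypeDemo` and `elementType` are made-up names; the point is the cost of walking the generic type on every field resolution:

```java
import java.lang.reflect.AnnotatedParameterizedType;
import java.lang.reflect.AnnotatedType;
import java.util.List;

public class ElementTypeDemo {
    // A field whose generic type carries the element type we want to recover
    static List<String> names;

    // The kind of per-invocation reflective derivation a converter performs:
    // extract the single type argument of List<T>
    static AnnotatedType elementType(AnnotatedType listType) {
        return ((AnnotatedParameterizedType) listType)
                .getAnnotatedActualTypeArguments()[0];
    }

    public static void main(String[] args) throws Exception {
        AnnotatedType listType = ElementTypeDemo.class
                .getDeclaredField("names").getAnnotatedType();
        System.out.println(elementType(listType).getType());
    }
}
```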

To avoid such expensive calls at execution time, a new interface, DelegatingOutputConverter, is introduced, declaring a getDerivedTypes(AnnotatedType) method. This method is used to preemptively derive the types at schema initialization time. The result is then cached and made available to the converter via ResolutionEnvironment#getDerived(AnnotatedType) at query execution time. This strategy significantly cuts the performance overhead.
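A minimal sketch of this strategy, assuming a simplified single-method shape for the interface (only the names DelegatingOutputConverter, getDerivedTypes, and getDerived come from the issue; the map-based cache and everything else here are illustrative assumptions):

```java
import java.lang.reflect.AnnotatedParameterizedType;
import java.lang.reflect.AnnotatedType;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DerivedTypeCacheSketch {

    // Simplified shape of the new interface described above
    interface DelegatingOutputConverter {
        List<AnnotatedType> getDerivedTypes(AnnotatedType type);
    }

    // Derives the element type(s) of List<T> reflectively; only called at init
    static final DelegatingOutputConverter listConverter = type ->
            List.of(((AnnotatedParameterizedType) type)
                    .getAnnotatedActualTypeArguments());

    static List<String> names; // stands in for a resolver's return type

    public static void main(String[] args) throws Exception {
        Map<AnnotatedType, List<AnnotatedType>> derived = new HashMap<>();

        // Schema initialization: derive once, cache the result
        AnnotatedType fieldType = DerivedTypeCacheSketch.class
                .getDeclaredField("names").getAnnotatedType();
        derived.put(fieldType, listConverter.getDerivedTypes(fieldType));

        // Query execution: a plain map lookup, no reflection on the hot path
        System.out.println(derived.get(fieldType).get(0).getType());
    }
}
```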

In addition, because the derived types are now known ahead of time, it becomes possible to preemptively prune the list of applicable output converters, so that only the bare minimum of converters is ever invoked per field.
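The pruning step can be sketched as a one-time filter over the registered converters, assuming a hypothetical supports(Class) check (none of these names are from the library):

```java
import java.util.List;
import java.util.stream.Collectors;

public class ConverterPruning {

    interface OutputConverter {
        boolean supports(Class<?> type);
        String name();
    }

    static OutputConverter conv(String name, Class<?> supported) {
        return new OutputConverter() {
            public boolean supports(Class<?> type) { return supported.isAssignableFrom(type); }
            public String name() { return name; }
        };
    }

    public static void main(String[] args) {
        List<OutputConverter> all = List.of(
                conv("listConverter", List.class),
                conv("mapConverter", java.util.Map.class),
                conv("optionalConverter", java.util.Optional.class));

        // With the field's types known at init time, keep only the converters
        // that can apply to this field; done once, not on every query
        Class<?> fieldType = java.util.ArrayList.class;
        List<String> applicable = all.stream()
                .filter(c -> c.supports(fieldType))
                .map(OutputConverter::name)
                .collect(Collectors.toList());

        System.out.println(applicable);
    }
}
```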

Combined, these two strategies reduce the performance cost of converters to nearly zero (as opposed to the current 200% overhead in many cases).

Furthermore, a similar strategy can be employed for input converters, whose types can likewise be derived ahead of time. Since input converters are wrapped into custom Jackson/Gson deserializers, most of the heavy lifting (e.g. caching) is performed by those libraries, so the optimization work is much easier there. The only semi-significant difference from the output case is that for input converters the type derivation happens upon the first ever invocation of the converter, rather than at schema initialization time.
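The first-invocation derivation can be sketched as simple memoization (again an illustrative class, not library code; in the real setup the wrapping Jackson/Gson deserializer is itself cached per type, which is what makes this cheap):

```java
import java.lang.reflect.AnnotatedParameterizedType;
import java.lang.reflect.AnnotatedType;
import java.util.List;

public class LazyInputConverter {

    private final AnnotatedType listType;
    private volatile AnnotatedType elementType; // derived on first use only

    LazyInputConverter(AnnotatedType listType) {
        this.listType = listType;
    }

    // The first invocation pays the reflective cost; later calls reuse it
    AnnotatedType elementType() {
        AnnotatedType derived = elementType;
        if (derived == null) {
            derived = ((AnnotatedParameterizedType) listType)
                    .getAnnotatedActualTypeArguments()[0];
            elementType = derived;
        }
        return derived;
    }

    static List<String> names; // stands in for a deserialized input type

    public static void main(String[] args) throws Exception {
        LazyInputConverter c = new LazyInputConverter(
                LazyInputConverter.class.getDeclaredField("names").getAnnotatedType());
        System.out.println(c.elementType().getType());
    }
}
```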
