This document contains useful information about performance-related topics. It will help you make the right modelling choices when using Gentics Mesh.
It is possible to store 200,000 or more nodes within a single node container. We have run tests with more than 2,000,000 nodes in Gentics Mesh. When storing that many nodes, keep the design suggestions below in mind.
The search index is provided by Elasticsearch. Please refer to the Elasticsearch Performance Guide.
Please refrain from using a remotely mounted filesystem for the search index. This would negatively impact the overall search index and node migration performance.
Excerpt from the Elasticsearch documentation:
Do not place the search index on a remotely mounted filesystem (e.g. NFS or SMB/CIFS); use storage local to the machine instead.
Search index write performance will degrade in low-memory environments (less than 1024 MB of heap).
It is advised to apply paging whenever possible. Without the perPage parameter all elements are returned, which can lead to very large responses.
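As a rough illustration, a paged node listing request could look like the sketch below. The base URL, the API version path and the project name "demo" are placeholders and the exact endpoint may differ between Mesh versions.

```typescript
// Minimal sketch of a paged node listing request against a local Mesh
// instance. BASE_URL and the project name "demo" are placeholder values.
const BASE_URL = "http://localhost:8080/api/v2";

async function loadNodePage(page: number, perPage: number): Promise<unknown> {
  const response = await fetch(
    `${BASE_URL}/demo/nodes?page=${page}&perPage=${perPage}`
  );
  if (!response.ok) {
    throw new Error(`Node listing failed with status ${response.status}`);
  }
  return response.json();
}

// Request the second page with 25 nodes instead of the full, unpaged listing.
loadNodePage(2, 25).then((page) => console.log(page));
```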
Regular paging performance is impacted by the number of nodes stored within a node container. In this situation it is advised to use GraphQL when accessing nodes, since GraphQL allows you to fine-tune the paging information.
Paging performance can be greatly increased by omitting the pageCount and totalCount fields. Including these fields requires Gentics Mesh to check the permissions of all elements in order to count them.
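A hedged sketch of such a GraphQL request is shown below. The endpoint path, the project name "demo" and field names such as elements and displayName are assumptions based on the Mesh GraphQL schema and may need to be adapted; the point is that the query only asks for one page of elements and leaves out totalCount and pageCount.

```typescript
// Sketch: load one page of nodes via GraphQL without totalCount/pageCount,
// so Mesh does not have to permission-check and count every element.
// Endpoint path, project name and field names are assumptions.
const GRAPHQL_URL = "http://localhost:8080/api/v2/demo/graphql";

const query = `
  {
    nodes(page: 2, perPage: 25) {
      elements {
        uuid
        displayName
      }
    }
  }
`;

async function loadPageViaGraphQl(): Promise<unknown> {
  const response = await fetch(GRAPHQL_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  return response.json();
}

loadPageViaGraphQl().then((result) => console.log(result));
```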
If you plan to use a caching layer or rely on browser caching of node requests, you can use the ETag value to cache these contents. In other cases it may be better to use the GraphQL endpoint, which allows you to load only the specific data you need. GraphQL can also help reduce the number of requests that need to be made and thus improve the performance of your implementation.
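As a minimal illustration of ETag-based caching, a client can remember the ETag of a previous node response and send it back via the standard If-None-Match header; a 304 Not Modified answer then means the cached copy can be reused. The in-memory map below is only a placeholder for whatever caching layer you actually use.

```typescript
// Sketch: conditional node request using the ETag returned by Mesh.
// A 304 Not Modified response means the locally cached copy is still valid.
const cache = new Map<string, { etag: string; body: unknown }>();

async function loadCached(url: string): Promise<unknown> {
  const cached = cache.get(url);
  const headers: Record<string, string> = {};
  if (cached) {
    headers["If-None-Match"] = cached.etag;
  }
  const response = await fetch(url, { headers });
  if (response.status === 304 && cached) {
    return cached.body; // Content unchanged: serve the cached copy.
  }
  const body = await response.json();
  const etag = response.headers.get("ETag");
  if (etag) {
    cache.set(url, { etag, body });
  }
  return body;
}
```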