
Out of Heap Memory in main_index Process #52

Open

tcrst opened this issue Sep 25, 2024 · 1 comment

tcrst commented Sep 25, 2024

The main_index process is running out of heap memory even after increasing `--max-old-space-size` to 64 GB. What is the recommended memory allocation, and are there any further optimization suggestions? Here is my supervisor config:

```ini
[program:main_index]
command=node --max-old-space-size=65536 index.js
directory=/OPI/modules/main_index
autostart=true
autorestart=true
stderr_logfile=/var/log/opi/main_index.err.log
stdout_logfile=/var/log/opi/main_index.out.log
user=root
environment=NODE_OPTIONS="--max-old-space-size=65536"
```
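
As a sanity check that the flag is actually reaching the process, the effective V8 heap limit can be printed with Node's built-in `v8` module. A minimal sketch; with the config above it should report roughly 64 GB:

```bash
# Minimal sketch: print the old-space heap limit the node binary actually sees,
# using the same NODE_OPTIONS as in the supervisor config above.
NODE_OPTIONS="--max-old-space-size=65536" node -e \
  'const v8 = require("v8");
   console.log((v8.getHeapStatistics().heap_size_limit / 1024 ** 3).toFixed(1), "GB");'
```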

I'm also getting this in the logs:

```
#
# Fatal error in , line 0
# Fatal JavaScript invalid size error 169220804 (see crbug.com/1201626)
#
#
#
#FailureMessage Object: 0x7ffdf0e7aaf0
----- Native stack trace -----

 1: 0xd3f611  [node]
 2: 0x217b8b1 V8_Fatal(char const*, ...) [node]
 3: 0x10c6568  [node]
 4: 0x12a571f  [node]
 5: 0x12a58ba  [node]
 6: 0x151fdf6 v8::internal::Runtime_GrowArrayElements(int, unsigned long*, v8::internal::Isolate*) [node]
 7: 0x770a1bcd9ef6
Debugger listening on ws://127.0.0.1:9229/7e322332-fae8-4761-b5da-2730df3f077f
For help, see: https://nodejs.org/en/docs/inspector
```
samedcildir (Contributor) commented

This problem usually occurs when the main indexer or ord fails during indexing and fills log_file.txt over and over with the same block commands. The size of log_file.txt has probably grown a lot on your node, and main_index.js currently needs to read that entire file into memory in order to work.

Even though it is possible to fix the file, we currently do not have a fix script. You can remove the repeating blocks from the file by hand, or you can restart the indexing with the memory flag set to around 16 GB or more.
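
To gauge how far the file has grown and which block commands are repeating before editing it by hand, something along these lines can help. A rough sketch: the log_file.txt path is an assumption based on the directory in the supervisor config above.

```bash
# Rough sketch; the path is an assumption based on the supervisor config above.
LOG=/OPI/modules/main_index/log_file.txt
du -h "$LOG"                                   # how large the file has grown
sort "$LOG" | uniq -c | sort -rn | head -n 20  # most-repeated lines, i.e. the repeating block commands
```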

Finally, I suggest using the restore.py script; it will download the latest snapshot from S3 and start from there.
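
If you go the snapshot route, the invocation is presumably along these lines (an assumption; check the module's README for the exact script location and arguments):

```bash
# Assumed invocation; consult the OPI README for the exact arguments.
cd /OPI/modules/main_index
python3 restore.py   # downloads the latest snapshot from S3 and restores from it
```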
