org_data_transfer
Building an export:
turn on extract database:
assume-role -e p -- aws rds start-db-instance --db-instance-identifier mpdx-api-extract-prod
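The instance takes a few minutes to come up; if you want the shell to block until it is ready, the AWS CLI waiter is one option (not part of the original steps, sketched here with the same assume-role wrapper):
assume-role -e p -- aws rds wait db-instance-available --db-instance-identifier mpdx-api-extract-prod   # polls until the instance reports "available"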
reset transfer database:
run-locally -e s \
  --docker-args "\
    -e DATADOG_TRACE=false \
    -e DB_ENV_POSTGRESQL_DB=mpdx_extract \
    -e DB_ENV_POSTGRESQL_USER=mpdx_extract_admin \
    -e DB_ENV_POSTGRESQL_PASS=REDACTED \
    -e DB_PORT_5432_TCP_ADDR=mpdx-api-extract-prod.ctzggtk79wff.us-east-1.rds.amazonaws.com" \
  -p false bash
start data transfer:
prod_console
Organization::Restorer.restore("05df083a-6836-4148-a15a-ae6d72d00749")
wait for the Sidekiq batch to finish
Dump transfer database to compressed file:
pg_dump -Fc \
  -h mpdx-api-extract-prod.ctzggtk79wff.us-east-1.rds.amazonaws.com \
  -f ./transfer-test1.dmp \
  -U mpdx_extract_admin \
  -d mpdx_extract
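Optionally, sanity-check the dump before moving on; pg_restore can list the archive's table of contents without restoring anything (filename here matches the pg_dump step above):
pg_restore -l ./transfer-test1.dmp | head   # prints the first few archive entries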
Transfer all s3 files to transfer bucket:
Organization::Restorer.new("05df083a-6836-4148-a15a-ae6d72d00749").restore_attachments(progress_bar: true)
Sync all files from transfer bucket to a local directory:
assume-role -e p -- aws s3 sync s3://mpdx-transmit-production docker_data/s3
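To sanity-check the sync, comparing the bucket totals against the local copy is one option (not part of the original steps; --summarize prints object count and size at the end of the listing):
assume-role -e p -- aws s3 ls --recursive --summarize s3://mpdx-transmit-production | tail -n 2   # bucket totals
du -sh docker_data/s3   # size of the local copy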
Zip dir:
tar -cf - docker_data/s3 | xz -1 -c - > docker_data/s3.tar.xz
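On whatever machine needs the files, the archive unpacks with tar's xz support (it was created from the relative path docker_data/s3, so it extracts back to that path):
tar -xJf docker_data/s3.tar.xz   # recreates docker_data/s3 under the current directory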
turn off extract database:
assume-role -e p -- aws rds stop-db-instance --db-instance-identifier mpdx-api-extract-prod
Run MPDX locally with export
pg restore:
git checkout public-dockerfile
Start the api database server:
docker-compose up -d db
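Before creating the database, it can help to confirm the container is accepting connections (a quick check, assuming the postgres client tools used below are installed; port 54322 is the same mapping used by createdb and pg_restore):
pg_isready -h localhost -p 54322 -U postgres   # should report "accepting connections"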
Create database:
createdb -h localhost -p 54322 -U postgres mpdx_extract
Restore data to database:
pg_restore -c -d mpdx_extract -h localhost -p 54322 -U postgres --no-privileges --no-owner ./transfer-test.dmp
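A quick way to confirm the restore worked is to list the tables (note the dump filename passed to pg_restore should match whatever pg_dump actually produced in the export steps above):
psql -h localhost -p 54322 -U postgres -d mpdx_extract -c '\dt'   # should list the restored tables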
build mpdx_api docker image:
bin/public_build.sh
(requires SIDEKIQ_CREDS to be set in the host environment)
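For example, exporting the variable before running the build (the value shown is a placeholder; the real credentials and their format come from the team's secret store, not from this doc):
export SIDEKIQ_CREDS="<redacted>"   # placeholder; substitute the real credentials before building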
run mpdx api:
docker-compose up
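Once the containers are up, a quick request against the API port used by mpdx_web below can confirm the server is responding (the exact path and status code are an assumption; any HTTP response rather than a connection error means the stack is up):
curl -i http://localhost:50000/api/v2   # expect an HTTP response, possibly 401/404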
run mpdx_web:
cd mpdx_web
npm install
API_URL=http://localhost:50000/api/v2 OAUTH_URL=http://auth.lvh.me:50000/auth/user npm run start