
Memberlist Errors

Symptoms

Pod logs show:
  • No private IP address found, and explicit IP not provided
  • (*Memberlist).UpdateStatus(0x0, 0x0, 0x0, 0x1)

Solution

Set memberlist.useAddrRef to true for QueryCoordinator and Spatial-API. Via values file:
memberlist:
  useAddrRef: true
Via Helm:
--set memberlist.useAddrRef=true
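If the charts are already deployed, the setting can be applied in place with helm upgrade on each release. The release and chart names below are assumptions based on the examples in this guide; use the names reported by helm list in your installation:
helm upgrade querycoordinator loqate/querycoordinator \
  --reuse-values \
  --set memberlist.useAddrRef=true \
  -n loqate
helm upgrade spatial-api loqate/spatial-api \
  --reuse-values \
  --set memberlist.useAddrRef=true \
  -n loqate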

Data Installation Errors

Symptoms

  • Pod logs show: ...is not a valid Global Knowledge Repository path
  • InstallManager pod never completes
  • Spatial-API pods fail to start

Solutions

1. Verify installation completed
Check the InstallManager logs for:
Completed installing the data packs.
Datapack installation was successfull.
Edit complete
If these lines are missing, data installation did not finish.
2. Check storage space
kubectl logs installmanager-xxxxx -n loqate | grep "Space available"
If insufficient:
  • Increase PV size
  • Install subset of datasets (see Quick Start)
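To see how much space the data volume currently provides before resizing, list the persistent volume claims in the namespace:
kubectl get pvc -n loqate
# The CAPACITY column shows the size provisioned for each volume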
3. Retry installation
Delete and reinstall InstallManager:
helm delete installmanager -n loqate
helm install installmanager loqate/installmanager \
  --set licenseKey=your-key \
  # ... other settings
4. Install minimal dataset
Create silent.txt with fewer countries, test, then expand.

Request Processing Errors

Symptoms

API requests fail with:
  • No spatialapi available
  • Failed to process

Solution: Restart Pods

Check pod status:
kubectl get pods -n loqate
If a pod shows 0/1 (not ready), delete it and Kubernetes will recreate it:
kubectl delete pod pod-name -n loqate
Wait 3 minutes and test again.
If all pods show 1/1 but errors persist, restart in this order:
  1. Delete Spatial-API pod
  2. Wait 3 minutes, test
  3. If still failing, delete QueryCoordinator pod
  4. Wait 3 minutes, test
  5. If still failing, delete Spatial-API pod again
kubectl delete pod spatial-api-xxxxx -n loqate
# Wait 3 minutes, test
kubectl delete pod querycoordinator-xxxxx -n loqate
# Wait 3 minutes, test
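Rather than guessing when the replacement pod is back, you can watch pod status and start the 3-minute wait once it reports ready:
kubectl get pods -n loqate -w
# Press Ctrl+C once the recreated pod shows 1/1 and Running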

Solution: Reinstall Charts

If pod restarts don’t work:
1. List releases:
helm list -n loqate
2. Delete failing release:
helm delete release-name -n loqate
3. Reinstall (see Configuration for commands)
4. Verify pods are running:
kubectl get pods -n loqate
Wait 3 minutes after pods show 1/1, then test.
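Helm can also confirm the release deployed successfully before you test (replace release-name with the release you reinstalled):
helm status release-name -n loqate
# STATUS: deployed indicates the install succeeded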

Connection Issues

Port Forward Not Working

Symptoms:
  • Connection refused on localhost:8900
  • Timeout errors
Solutions:
  1. Verify port forward is running (command doesn’t terminate)
  2. Check QueryCoordinator pod is ready:
kubectl get pods -n loqate
# querycoordinator should show 1/1
  3. Try a different local port:
kubectl port-forward -n loqate svc/querycoordinator 9000:8900
# Then test localhost:9000
  4. Check that a firewall is not blocking localhost
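A quick request against the forwarded port shows whether the tunnel itself is working; any HTTP response, even an error status, means traffic is reaching the service. The root path below is only a placeholder; substitute an endpoint your deployment serves:
curl -v http://localhost:8900/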

Image Pull Errors

Symptoms

Pods show ImagePullBackOff or ErrImagePull

Solutions

1. Verify Docker Hub credentials:
kubectl get secret -n loqate
# Should show registry credentials
2. Test Docker Hub access:
docker login
# Use your Docker Hub credentials
3. Recreate image pull secret:
kubectl delete secret regcred -n loqate
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=your-username \
  --docker-password=your-password \
  -n loqate
4. Verify Loqate repository access
Contact your Loqate representative to confirm your Docker Hub ID has been granted access.
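The pod's events usually state the exact pull failure, which helps distinguish bad credentials from missing repository access:
kubectl describe pod pod-name -n loqate | grep -A 10 "Events"
# "unauthorized" typically indicates bad credentials; "pull access denied"
# typically indicates the Docker Hub ID has not been granted repository access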

Performance Issues

Slow Response Times

Check resource limits:
kubectl describe pod pod-name -n loqate | grep -A 5 "Limits"
If CPU/memory limits are being hit:
  • Increase resource limits in values file
  • See Configuration for sizing guidelines
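To confirm whether the limits are actually being hit, compare live consumption against them (this requires the metrics server to be installed in the cluster):
kubectl top pods -n loqate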
Check storage performance: Slow storage impacts response times. Consider:
  • Faster storage class (SSD over HDD)
  • Local storage over network storage

High Memory Usage

High memory usage is normal for large datasets. Ensure limits are sized appropriately:
  • Standard datasets: 4-8Gi
  • Premium US: 8-16Gi
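In a values file these limits sit under the standard Kubernetes resources block. The exact key path depends on the chart, so check your chart's values.yaml; the figures below are just the standard-dataset range from above:
resources:
  requests:
    memory: 4Gi
  limits:
    memory: 8Gi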

Collecting Diagnostic Information

If issues persist, collect diagnostic data before contacting support:
# Pod logs
kubectl logs pod-name -n loqate > pod-logs.txt

# Pod description
kubectl describe pod pod-name -n loqate > pod-describe.txt

# Events
kubectl get events -n loqate --sort-by='.lastTimestamp' > events.txt

# Resource usage
kubectl top pods -n loqate > resource-usage.txt
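If convenient, bundle the collected files into a single archive to attach when contacting support:
tar czf loqate-diagnostics.tar.gz pod-logs.txt pod-describe.txt events.txt resource-usage.txt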

Contact Support

If issues persist after trying these solutions, contact [email protected] with:
  • Description of the issue
  • Steps already attempted
  • Diagnostic logs
  • Kubernetes version and environment details

Next Steps