document how to use PV in minikube #7828
I am using macOS (Darwin 10.14.6) and hyperkit. On v1.8.2 it doesn't happen (I downgraded minikube and it passed, then upgraded again and it failed again). It happened on several Mac machines. Maybe related to #3869 |
Hey @liranmauda thanks for opening this issue, it looks like it could be a bug with the storage provisioner. Would you be able to provide the k8s files you applied to the cluster so that I could reproduce this issue? |
Hi @priyawadhwa
Tell me if you need anything more. |
I am running into similar issues. Any updates? |
Running into the same issue trying to use: https://github.com/helm/charts/tree/master/stable/mongodb-replicaset
Help would be appreciated, thank you. |
I don't know if this helps but I was able to fix the problem by changing my persistent volumes. I am now using something like this:
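The original manifest did not survive here; a minimal hostPath PV in that spirit, with placeholder name, size, and path:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  storageClassName: standard   # minikube's default StorageClass
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/example-pv
```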
and then you can use the volume in your deployment like this:
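Likewise a sketch, assuming a claim that matches the PV above (all names are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: data        # mount the claimed volume into the container
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: example-pvc
```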
|
Update: after about 20 minutes or so, the issue resolved itself. |
@yoavcloud ok great :) |
If someone runs into this, could they please provide the output of |
Funny thing: the same deployment works on a real k8s cluster |
@tstromberg output from
The line above is repeated hundreds of times. Additionally, the following line exists at the tail end of the logs:
|
This is caused by the addition of managedFields in v1.18.0 beta 2 [1]. More details about the issue can be found in kubernetes/kubernetes#89080. In the first log excerpt that I pasted above, you can see that the managedFields cannot be parsed by r2d4's fork of external-storage. But, looking at the source here, it seems like the changes have already been updated. @tstromberg based on your comment in #3628, does

[1] https://kubernetes.io/blog/2020/04/01/kubernetes-1.18-feature-server-side-apply-beta-2/ |
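For context, a sketch of what this metadata looks like on a v1.18 PVC; the manager, timestamp, and field names are illustrative, not taken from this issue's logs:

```yaml
metadata:
  name: my-pvc
  managedFields:            # new in v1.18: tracks which manager owns which fields
    - manager: kube-controller-manager
      operation: Update
      apiVersion: v1
      time: "2020-05-20T12:00:00Z"
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:volume.beta.kubernetes.io/storage-provisioner: {}
```

It is this block that the vendored pre-1.18 client code in the old provisioner fails to deserialize.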
Tried the workaround (minikube config set kubernetes-version v1.16.0) and creating the PV first; neither helped on Arch Linux / minikube 1.9.2. Will try 1.10 later. |
Hi. I am facing a similar issue with other helm charts. It seems to be related to how the finalizers are set up in the PVC configuration. Using
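The snippet that followed is missing; for reference, the protection finalizer that Kubernetes sets on claims sits in the PVC metadata like this (names illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  finalizers:
    - kubernetes.io/pvc-protection   # blocks deletion while the claim is in use
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```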
Hope it can help to identify the issue. |
I have the same in my logs. Is there any update or workaround for this? |
I have learned that minikube after version 1.8.2 uses the docker driver by default instead of a virtual machine driver, and that saving and restoring of data for the related directories is not implemented there yet. So it helps to start minikube with an explicit driver selection, as sketched below (see #8458).
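The exact command from this comment is missing; explicit driver selection looks like the following, where hyperkit is only an example:

```shell
# choose a VM driver explicitly instead of the docker default
minikube start --driver=hyperkit

# releases before minikube 1.9 used the older flag name
minikube start --vm-driver=hyperkit
```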
I am using hyperkit, and it is not working:
I am still getting:
from the pvc:
|
I get the same when creating an Elasticsearch cluster using the Elasticsearch operator:
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
I'm using the docker driver. I have a mysql deployment with the following PVC and that one gets bound. |
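The PVC itself is missing from the comment; a minimal claim that minikube's default provisioner binds would look something like this (name and size are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  # with no storageClassName set, the default "standard" class provisions it
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```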
I get you; that hits straight to the point! The issue is easier to see if you look at the YAML file: the managedFields there contain entries like

f:ownerReferences:
  .: {}
  k:{"uid":"690cb65e-c608-4995-97ce-68c7eb7ce3a6"}: {}

which, if you translate it into JSON, looks for example like

"f:ownerReferences": {
  ".": {},
  "k:{\"uid\":\"39a5cd2c-ad5d-4915-800d-fb27bc2884da\"}": {
    ".": {}
  }
}

This is valid from a JSON perspective, but it seems the old parser cannot handle such keys. It is indeed #7218 |
Hello, I had the same issue while trying to deploy Elasticsearch on Minikube following this guide:

This is my configuration:

minikube version: v1.11.0

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1", GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6", GitTreeState:"clean", BuildDate:"2020-01-14T21:04:32Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Thank you very much in advance for your help. |
We have an integration test for PV; we should ensure that it covers this case. |
I'm running into this issue with minikube v1.12.1 running k8s 1.18.3:
It looks like it's due to permissions issues:
You can repro by installing cockroachdb via helm: https://www.cockroachlabs.com/docs/stable/orchestrate-a-local-cluster-with-kubernetes.html |
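A sketch of the repro, assuming Helm 3 and the chart repo from CockroachDB's docs:

```shell
helm repo add cockroachdb https://charts.cockroachdb.com/
helm repo update
helm install my-release cockroachdb/cockroachdb

# on affected minikube versions the chart's PVCs stay Pending
kubectl get pvc
```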
Also experiencing this with any helm chart that requires a PV (redis-ha, rabbitmq-ha, prometheus, grafana). |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
I'm having this after upgrading minikube from 1.14.2 to 1.17.0, using virtualbox. No PV/PVC works: default helm charts that used to work are not working. I tried start/stop/delete and Kubernetes versions 1.18.15 and 1.20.2 in minikube; still not working ("unbound immediate PersistentVolumeClaims"). Deleting the box and using the same helm charts/values with Kubernetes version 1.17.17 on minikube 1.17.0 works. |
Hello. This might not be exactly the same issue, but I wanted to share a potential solution I found that could help. I was trying to use Jenkins with Helm, but the pod kept entering a CrashLoopBackOff state. The issue was resolved by changing the permissions of the directory /tmp/hostpath-provisioner/default/my-jenkins on the pod from 755 to 777. I believe this issue could be resolved by modifying the directory creation permissions from 0755 to 0777 in the following file: https://github.com/kubernetes/minikube/blob/master/cmd/storage-provisioner/main.go |
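A sketch of that workaround applied from the host, without rebuilding the provisioner; the path is the commenter's example and will differ per chart:

```shell
# relax permissions on the already-provisioned directory inside the minikube node
minikube ssh -- sudo chmod -R 777 /tmp/hostpath-provisioner/default/my-jenkins
```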
When I am trying to deploy mongodb on minikube v1.9.2, it fails with:
pod has unbound immediate PersistentVolumeClaims
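The standard first diagnostics for an unbound claim, using stock kubectl and minikube commands:

```shell
kubectl get pv,pvc                 # is the claim Pending or Bound?
kubectl describe pvc <claim-name>  # the Events section explains why binding failed
kubectl -n kube-system get pods    # check that storage-provisioner is Running
minikube logs                      # provisioner errors surface in here
```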