
Max pods per node should probably force-new #2084

Closed

andor44 opened this issue Sep 20, 2018 · 3 comments

andor44 commented Sep 20, 2018

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
  • If an issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to "hashibot", a community member has claimed the issue already.

Terraform Version

» terraform -v
Terraform v0.11.8
+ provider.google v1.18.0
+ provider.helm (unversioned)
+ provider.kubernetes v1.2.0
+ provider.random v2.0.0

Affected Resource(s)

  • google_container_cluster
  • google_container_node_pool

Terraform Configuration Files

resource "google_container_node_pool" "foo" {
  name              = "foo"
  cluster           = "${google_container_cluster.cluster.name}"
  # max_pods_per_node = 123 or even omitting the field
  # ...
}

Debug Output

I can provide it if needed; it would take me some time to sanitize it.

Panic Output

No panic.

Expected Behavior

max_pods_per_node should probably force a new resource. Most likely this field cannot be changed on Google's side, because that would require changing the underlying instance group/template.
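
For illustration, force-new behavior is normally expressed with the ForceNew flag in the provider's schema. A minimal sketch of what that could look like, using made-up resource/function names rather than the actual google_container_node_pool code:

package main

import (
	"github.com/hashicorp/terraform/helper/schema"
)

// Sketch only: ForceNew tells Terraform that any change to the field
// requires destroying and recreating the resource, so a plan shows
// -/+ (destroy and recreate) instead of ~ (update in-place).
func resourceExampleNodePool() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"max_pods_per_node": {
				Type:     schema.TypeInt,
				Optional: true,
				ForceNew: true,
			},
		},
	}
}

func main() {
	_ = resourceExampleNodePool()
}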

Actual Behavior

At the moment the Google API does not reject requests that attempt to modify this field, but it seems to silently discard the change. If I omit the field, I get an update-in-place on all of my plans/applies:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ module.kubernetes.google_container_node_pool.foo
      max_pods_per_node: "110" => "0"

  ~ module.kubernetes.google_container_node_pool.bar
      max_pods_per_node: "110" => "0"

  ~ module.kubernetes.google_container_node_pool.baz
      max_pods_per_node: "110" => "0"

This goes through fine, i.e. I get Apply complete! Resources: 0 added, 3 changed, 0 destroyed. every time, but the same diff comes up again on every plan.

Steps to Reproduce

  1. Have a node pool that was created with a pre-1.18 version of this provider (so the field in the Terraform state is still empty)
  2. Update the provider to 1.18
  3. terraform plan/apply, rinse and repeat

Important Factoids

This is a Beta feature that is not yet well (or really, at all) documented on Google's side, so it might be premature to make it force-new. While it would be very unexpected, Google might make this field changeable.

References

ghost added the bug label Sep 20, 2018
@leejones

I think this is related to (and possibly solved by) #2077.

andor44 (author) commented Sep 26, 2018

To be honest, I'm not 100% sure what computed actually means. Does marking a field computed mean that you can still change it without having to recreate the resource? If so, I'm not sure that's enough. As I mentioned in the opening message, I do not think this field is intended to be changeable, or at the very least the GCP API seems to discard modifications to it.
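
For reference, my rough understanding of the two flags, assuming the standard helper/schema semantics (a sketch, not the provider's actual schema): Computed on its own does not control whether a change can be applied in place; it mainly means that when the field is omitted from config, Terraform keeps the value the API reports instead of diffing it to zero, which is what would get rid of the "110" => "0" noise. Whether a change forces recreation is governed by ForceNew.

package main

import "github.com/hashicorp/terraform/helper/schema"

// Sketch, assuming standard helper/schema behavior; the field below is
// hypothetical, not the provider's actual definition.
var maxPodsPerNode = &schema.Schema{
	Type:     schema.TypeInt,
	Optional: true,
	Computed: true, // omitted from config => keep the API-reported value (no "110" => "0" diff)
	ForceNew: true, // explicitly changed in config => destroy and recreate the node pool
}

func main() { _ = maxPodsPerNode }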

danawillow self-assigned this Oct 1, 2018
danawillow added a commit that referenced this issue Oct 2, 2018
I don't see any way of updating in the API, and even if there were, we aren't calling it right now.

Fixes #2084
ghost commented Nov 16, 2018

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

hashicorp locked and limited conversation to collaborators Nov 16, 2018