sean imap1.de.opalstack.com

The IMAP server address differs from the one mentioned on the help page (mail.de.opalstack.com). Does the help page need an update?

  • sean replied to this.

@sean does this not import the email contents? I ran the script successfully and the email was created, but the existing emails were not imported.

    • sean replied to this.

snaushads The IMAP server address differs from the one mentioned on the help page (mail.de.opalstack.com). Does the help page need an update?

      The migration tool is not an email client so our email client configuration instructions aren't applicable to it.

snaushads @sean does this not import the email contents? I ran the script successfully and the email was created, but the existing emails were not imported.

      Correct, as stated in the OP:

      ⚠️ The email migrator will not migrate your email messages that are stored on WebFaction's mail server. To copy your mail over please refer to: Importing Email from Another Provider
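For anyone reading along: copying messages between providers is typically done over IMAP, and the guide linked above is the authoritative procedure. As a rough sketch of the idea, here is a minimal one-way copy using Python's standard imaplib (the hostnames and credentials in the usage comment are placeholders, not real values):

```python
import imaplib

def copy_folder(src, dst, folder="INBOX"):
    """Append every message from `folder` on src to the same folder on dst.

    src and dst are logged-in imaplib.IMAP4/IMAP4_SSL connections.
    Returns the number of messages copied.
    """
    src.select(folder, readonly=True)
    _, data = src.search(None, "ALL")
    copied = 0
    for num in data[0].split():
        _, msg = src.fetch(num, "(RFC822)")
        # Flags and internal date are left for the destination server to set.
        dst.append(folder, None, None, msg[0][1])
        copied += 1
    return copied

# Usage (placeholder hosts/credentials -- substitute your own):
#   src = imaplib.IMAP4_SSL("mail.webfaction.com"); src.login("wf_user", "...")
#   dst = imaplib.IMAP4_SSL("imap1.de.opalstack.com"); dst.login("opal_mailuser", "...")
#   copy_folder(src, dst)
```

This skips flags and timestamps for brevity; a real migration should preserve those (or just use a dedicated tool like mbsync/imapsync).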

      sean Is it possible to add a way to auto-include subdomains?

      e.g. to set include_domains = *.example.com instead of specifying all 50 subdomains
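Until such a feature exists, one plausible interpretation of a wildcard like *.example.com is shell-style pattern matching. A sketch of that filtering (the include_domains semantics here are my assumption, not the migrator's actual behavior):

```python
from fnmatch import fnmatch

def domain_included(domain, patterns):
    """Return True if `domain` matches any entry in `patterns`.

    Plain entries must match exactly; entries like '*.example.com'
    match any subdomain (and, in this sketch, the bare apex too).
    """
    for pat in patterns:
        if domain == pat or fnmatch(domain, pat):
            return True
        # fnmatch('example.com', '*.example.com') is False, so treat the
        # wildcard as also covering the apex domain explicitly.
        if pat.startswith("*.") and domain == pat[2:]:
            return True
    return False
```

e.g. domain_included("blog.example.com", ["*.example.com"]) matches without listing every subdomain.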

      • sean replied to this.

        miopa I've logged a feature request for it, will let you know if it happens.

        I think one email address confused the migration script:

        2020-11-26 17:05:43 : DEBUG : performing POST mail/add/ with {'source': 'bounce+05345.345e84-arhiva=email.dostava24.mk@email.dostava24.mk', 'destinations': [], 'forwards': []}
        2020-11-26 17:05:43 : DEBUG : Starting new HTTPS connection (1): my.opalstack.com:443
        2020-11-26 17:05:45 : DEBUG : https://my.opalstack.com:443 "POST /api/v0/mail/add/ HTTP/1.1" 400 67
        2020-11-26 17:05:45 : DEBUG : {"error": "invalid source address or domain is on another account"}
        2020-11-26 17:05:45 : DEBUG : got resp: <Response [400]> and result {'error': 'invalid source address or domain is on another account'}
        Traceback (most recent call last):
          File "/usr/local/bin/wfmigrate", line 71, in <module>
            migrator.run()
          File "/opt/app_migrators-1.6.2/wf_generic_email/migrator.py", line 70, in run
            self.migrate()
          File "/opt/app_migrators-1.6.2/wf_generic_email/migrator.py", line 52, in migrate
            self.wf_migrator_email.create_addresses()
          File "/opt/app_migrators-1.6.2/lib/wf_migrator_email.py", line 154, in create_addresses
            created = self.opal_cfg.api.addresses.ensure(needed)
          File "/opt/app_migrators-1.6.2/lib/opalapi.py", line 155, in ensure
            created = self.create(tocreate, wait=wait)
          File "/opt/app_migrators-1.6.2/lib/opalapi.py", line 143, in create
            for item in tocreate: create_item(item)
          File "/opt/app_migrators-1.6.2/lib/opalapi.py", line 135, in create_item
            create_result = self.api.post(self.model + '/add/', item, ensure_status=[200])
          File "/opt/app_migrators-1.6.2/lib/opalapi.py", line 510, in post
            return self.request_result(urlpath, dataObj, 'POST', ensure_status)
          File "/opt/app_migrators-1.6.2/lib/opalapi.py", line 503, in request_result
            resp, result = self.request(urlpath, dataObj, method, ensure_status)
          File "/opt/app_migrators-1.6.2/lib/opalapi.py", line 499, in request
            raise RuntimeError(f'Unexpected status_code: {resp.status_code}')
        RuntimeError: Unexpected status_code: 400
        • sean replied to this.

          miopa you can't add plus addresses like that on our side.

          If you use bounce@mail.dostava24.mk for the address then mail sent to bounce+05345.345e84-arhiva=email.dostava24.mk@email.dostava24.mk will be delivered as expected.
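In other words, the server treats everything after the + in the local part as a tag and routes on the base address. The normalization looks roughly like this (illustrative only, not Opalstack's actual implementation):

```python
def base_address(addr):
    """Strip a plus-tag from the local part:
    'bounce+anything@example.com' -> 'bounce@example.com'."""
    local, _, domain = addr.partition("@")
    return local.split("+", 1)[0] + "@" + domain
```

So only the base address needs to exist on the receiving side; any plus-tagged variant of it is delivered to the same mailbox.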

Is there a way to pause email receipt, so as to allow moving all the old email over and switching the DNS servers? Or is the DNS switch these days clean with respect to email, i.e. no emails will arrive at the old server after the DNS switch? It's been a while since I did this.

          • sean replied to this.

            Axel there will always be a bit of propagation time, never really a clean switch. You usually have to watch mail at both ends until propagation is done, then do a final mailbox sync to get the Opalstack side caught up.

            Curious as to what you mean by "pause email receipt" - what would you expect to happen to a message that was sent to your domain in this paused state?

            • Axel replied to this.

sean OK, so I should first change the DNS servers, then wait a day, and then move the old mail (I'm planning to use the zipped maildir method), assuming that is what you mean by "mailbox sync"?

Email is a "store-and-forward" system, right? So I was thinking that email would be held at some other server (for at most a day) until it is accepted by the Opalstack server, where I'd read it from.

              • sean replied to this.

Axel sean OK, so I should first change the DNS servers, then wait a day, and then move the old mail (I'm planning to use the zipped maildir method), assuming that is what you mean by "mailbox sync"?

                You could do it that way, or you could let us load your zipped maildirs in advance and then you can follow up with a quick mbsync after the switch.

Axel Email is a "store-and-forward" system, right? So I was thinking that email would be held at some other server (for at most a day) until it is accepted by the Opalstack server, where I'd read it from.

                The thing is, who is going to store it?

If the goal is to pause mail (i.e. not attempt to deliver it to any MX server during DNS propagation), you'd have to configure every SMTP server that sends mail to your domain to defer delivery until some future date. If that's not done, then the mail is going to arrive at whichever MX the SMTP server picks up from DNS.
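For context on the "store-and-forward" point: sending MTAs do queue and retry on temporary failure (RFC 5321 recommends retrying for at least 4-5 days), so a brief outage usually doesn't lose mail. But during DNS propagation the old MX is answering normally, so there is no failure to retry; the message is simply delivered there. An illustrative retry schedule (the numbers are typical defaults, not any particular server's):

```python
def retry_times(first=15 * 60, factor=2, cap=6 * 3600, give_up=5 * 24 * 3600):
    """Seconds after the initial failure at which a sending MTA
    might retry delivery: exponential backoff, capped per-interval,
    until it gives up and bounces the message."""
    elapsed, interval, times = 0, first, []
    while elapsed + interval <= give_up:
        elapsed += interval
        times.append(elapsed)
        interval = min(interval * factor, cap)
    return times
```

The takeaway: queuing protects you against your server being down, not against it being the wrong server.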

                • Axel replied to this.

                  sean

                  I think something has broken with the migrator (or maybe WF?).

                  I just ran it (with simulate) on the WF account/domain that I ran originally. Nothing has changed in the file.

                  Here's the readout:

                  2020-11-28 02:51:14 : INFO : Simulation started
                  2020-11-28 02:51:16 : WARNING : Email address EMAILADDRESSHERE points to nonexistent or excluded mailbox MAILBOXNAME
                  Traceback (most recent call last):
                    File "/usr/local/bin/wfmigrate", line 71, in <module>
                      migrator.run()
                    File "/opt/app_migrators-1.6.2/wf_generic_email/migrator.py", line 61, in run
                      self.simulate()
                    File "/opt/app_migrators-1.6.2/wf_generic_email/migrator.py", line 30, in simulate
                      self.wf_migrator_email.create_addresses(simulate=True)
                    File "/opt/app_migrators-1.6.2/lib/wf_migrator_email.py", line 137, in create_addresses
                      toretain, todelete, tocreate = self.opal_cfg.api.addresses.check_ensure(needed)
                    File "/opt/app_migrators-1.6.2/lib/opalapi.py", line 58, in check_ensure
                      existing = self.get_all()
                    File "/opt/app_migrators-1.6.2/lib/opalapi.py", line 166, in get_all
                      for existing_summary in self.api.get(self.model + '/list/', ensure_status=[200])[self.model + 's']
                    File "/opt/app_migrators-1.6.2/lib/opalapi.py", line 507, in get
                      return self.request_result(urlpath, None, 'GET', ensure_status)
                    File "/opt/app_migrators-1.6.2/lib/opalapi.py", line 503, in request_result
                      resp, result = self.request(urlpath, dataObj, method, ensure_status)
                    File "/opt/app_migrators-1.6.2/lib/opalapi.py", line 499, in request
                      raise RuntimeError(f'Unexpected status_code: {resp.status_code}')
                  RuntimeError: Unexpected status_code: 400

                  That initial 2020-11-28 02:51:16 : WARNING makes total sense. I got it before and it's accurate.

But it's also how I know (I think) that the migrator IS connecting to WF. So something is happening after that, as the migrator tries to complete one of its tasks.

Off the top of my head: maybe OS has hit a rate limit?


                  sean please run the sim with --loglevel debug added to the command and then amend your original post with the output, thanks.

                  I think this is the relevant portion:

                  2020-11-28 03:26:51 : DEBUG : performing GET mail/list/
                  2020-11-28 03:26:51 : DEBUG : Starting new HTTPS connection (1): my.opalstack.com:443
                  2020-11-28 03:26:52 : DEBUG : https://my.opalstack.com:443 "GET /api/v0/mail/list/ HTTP/1.1" 400 18
                  2020-11-28 03:26:52 : DEBUG : {"error": "error"}
                  2020-11-28 03:26:52 : DEBUG : got resp: <Response [400]> and result {'error': 'error'}

                  It seems to get through the requests to GET /api/v0/domain/read/UUIDHERE HTTP/1.1.

                  But then errors out when it starts the requests to GET /api/v0/mail/list/ HTTP/1.1.

                  After that, it's the same Traceback as above.


                  @sean If you need to see everything (i.e.: directly from my account) you know where to find it ;-) Or let me know and I can open a ticket.

                  • sean replied to this.

                    aa11 please run the sim with --loglevel debug added to the command and then amend your original post with the output, thanks.

                    • aa11 replied to this.

                      I'm getting the following error when trying to copy over a single domain:

                      2020-11-28 09:21:49 : DEBUG : performing GET domain/list/
                      2020-11-28 09:21:49 : DEBUG : Starting new HTTPS connection (1): my.opalstack.com:443
                      2020-11-28 09:21:50 : DEBUG : https://my.opalstack.com:443 "GET /api/v0/domain/list/ HTTP/1.1" 200 97
                      2020-11-28 09:21:50 : DEBUG : {"domains": [{"id": "XXXXXX-86af-4b5b-a79a-f43329321bd4", "name": "username.opalstacked.com"}]}
                      2020-11-28 09:21:50 : DEBUG : got resp: <Response [200]> and result {'domains': [{'id': 'XXXXXX-86af-4b5b-a79a-f43329321bd4', 'name': 'username.opalstacked.com'}]}
                      2020-11-28 09:21:50 : DEBUG : performing GET domain/read/XXXXXX-86af-4b5b-a79a-f43329321bd4
                      2020-11-28 09:21:50 : DEBUG : Starting new HTTPS connection (1): my.opalstack.com:443
                      2020-11-28 09:21:50 : DEBUG : https://my.opalstack.com:443 "GET /api/v0/domain/read/XXXXXX-86af-4b5b-a79a-f43329321bd4 HTTP/1.1" 200 82
                      2020-11-28 09:21:50 : DEBUG : {"id": "XXXXXX-86af-4b5b-a79a-f43329321bd4", "name": "username.opalstacked.com"}
                      2020-11-28 09:21:50 : DEBUG : got resp: <Response [200]> and result {'id': 'XXXXXX-86af-4b5b-a79a-f43329321bd4', 'name': 'username.opalstacked.com'}
                      2020-11-28 09:21:50 : DEBUG : performing GET domain/list/
                      2020-11-28 09:21:50 : DEBUG : Starting new HTTPS connection (1): my.opalstack.com:443
                      2020-11-28 09:21:51 : DEBUG : https://my.opalstack.com:443 "GET /api/v0/domain/list/ HTTP/1.1" 200 97
                      2020-11-28 09:21:51 : DEBUG : {"domains": [{"id": "XXXXXX-86af-4b5b-a79a-f43329321bd4", "name": "username.opalstacked.com"}]}
                      2020-11-28 09:21:51 : DEBUG : got resp: <Response [200]> and result {'domains': [{'id': 'XXXXXX-86af-4b5b-a79a-f43329321bd4', 'name': 'username.opalstacked.com'}]}
                      2020-11-28 09:21:51 : DEBUG : performing GET domain/read/XXXXXX-86af-4b5b-a79a-f43329321bd4
                      2020-11-28 09:21:51 : DEBUG : Starting new HTTPS connection (1): my.opalstack.com:443
                      2020-11-28 09:21:52 : DEBUG : https://my.opalstack.com:443 "GET /api/v0/domain/read/XXXXXX-86af-4b5b-a79a-f43329321bd4 HTTP/1.1" 200 82
                      2020-11-28 09:21:52 : DEBUG : {"id": "XXXXXX-86af-4b5b-a79a-f43329321bd4", "name": "username.opalstacked.com"}
                      2020-11-28 09:21:52 : DEBUG : got resp: <Response [200]> and result {'id': 'XXXXXX-86af-4b5b-a79a-f43329321bd4', 'name': 'username.opalstacked.com'}
                      2020-11-28 09:21:52 : INFO : Creating domains: [{'name': 'example.com'}]
                      2020-11-28 09:21:52 : DEBUG : performing POST domain/add/ with [{'name': 'example.com'}]
                      2020-11-28 09:21:52 : DEBUG : Starting new HTTPS connection (1): my.opalstack.com:443
                      2020-11-28 09:21:53 : DEBUG : https://my.opalstack.com:443 "POST /api/v0/domain/add/ HTTP/1.1" 400 52
                      2020-11-28 09:21:53 : DEBUG : [{"name":["Domain with this name already exists."]}]
                      2020-11-28 09:21:53 : DEBUG : got resp: <Response [400]> and result [{'name': ['Domain with this name already exists.']}]
                      Traceback (most recent call last):
                        File "/usr/local/bin/wfmigrate", line 71, in <module>
                          migrator.run()
                        File "/opt/app_migrators-1.6.2/wf_generic_email/migrator.py", line 70, in run
                          self.migrate()
                        File "/opt/app_migrators-1.6.2/wf_generic_email/migrator.py", line 51, in migrate
                          self.wf_migrator_email.create_domains()
                        File "/opt/app_migrators-1.6.2/lib/wf_migrator_email.py", line 132, in create_domains
                          created = self.opal_cfg.api.domains.ensure(needed)
                        File "/opt/app_migrators-1.6.2/lib/opalapi.py", line 155, in ensure
                          created = self.create(tocreate, wait=wait)
                        File "/opt/app_migrators-1.6.2/lib/opalapi.py", line 140, in create
                          create_result = self.api.post(create_url, tocreate, ensure_status=[200])
                        File "/opt/app_migrators-1.6.2/lib/opalapi.py", line 510, in post
                          return self.request_result(urlpath, dataObj, 'POST', ensure_status)
                        File "/opt/app_migrators-1.6.2/lib/opalapi.py", line 503, in request_result
                          resp, result = self.request(urlpath, dataObj, method, ensure_status)
                        File "/opt/app_migrators-1.6.2/lib/opalapi.py", line 499, in request
                          raise RuntimeError(f'Unexpected status_code: {resp.status_code}')
                      RuntimeError: Unexpected status_code: 400

Update: I realised it was due to my domain 'example.com' being registered on two different Opalstack accounts (I have two accounts: one is a testing account, one is a live account). Once I deleted the domain from my testing account, it worked.

I sent a ticket about this - I don't see why I can't have a domain name registered to two different Opalstack accounts.

                      I just ran it again on the exact same config file.

                      It seems to no longer be getting tripped up on GET /api/v0/mail/list/ HTTP/1.1. It finds conflicting forwards and notes:

                      2020-11-28 16:27:34 : WARNING : Address ADDRESS@EXAMPLE.com will be DELETED!

Which is expected. (I assume, in these cases, the version that exists at OS will be deleted, only to be immediately recreated by migrate?)

                      But then it runs into another error:

                      2020-11-28 16:27:37 : INFO : Creating mailusers: []
                      Traceback (most recent call last):
                        File "/usr/local/bin/wfmigrate", line 71, in <module>
                          migrator.run()
                        File "/opt/app_migrators-1.6.2/wf_generic_email/migrator.py", line 61, in run
                          self.simulate()
                        File "/opt/app_migrators-1.6.2/wf_generic_email/migrator.py", line 36, in simulate
                          needed = self.wf_migrator_email.generate_needed_domains()
                        File "/opt/app_migrators-1.6.2/lib/wf_migrator_email.py", line 90, in generate_needed_domains
                          for address in self.generate_needed_addresses()
                        File "/opt/app_migrators-1.6.2/lib/wf_migrator_email.py", line 90, in <setcomp>
                          for address in self.generate_needed_addresses()
                      AttributeError: 'NoneType' object has no attribute 'split'

                      I ran it with --loglevel debug and it seems that the error occurs after migrate is done looping through the final mailuser in its queue.

                      This is the final readout (with --loglevel debug) before the error above:

                      2020-11-28 16:33:06 : DEBUG : performing GET mailuser/read/UUID-HERE
                      2020-11-28 16:33:06 : DEBUG : Starting new HTTPS connection (1): my.opalstack.com:443
                      2020-11-28 16:33:06 : DEBUG : https://my.opalstack.com:443 "GET /api/v0/mailuser/read/UUID-HERE HTTP/1.1" 200 186
                      2020-11-28 16:33:06 : DEBUG : {"id": "UUID-HERE", "name": "MAILUSERNAME", "pending_deletion": false, "ready": true, "imap_server": "UUID-2-HERE", "init_created": true}
                      2020-11-28 16:33:06 : DEBUG : got resp: <Response [200]> and result {'id': 'UUID-HERE', 'name': 'MAILUSERNAME', 'pending_deletion': False, 'ready': True, 'imap_server': 'UUID-2-HERE', 'init_created': True}

                      Thanks for the fix on the first part. Sorry if you're in the middle of working on this and you already know about this error.

                      • sean replied to this.

aa11 do you have an email address defined at WF that has no local part (i.e. the part before @domain.com)?

                        • aa11 replied to this.

sean do you have an email address defined at WF that has no local part (i.e. the part before @domain.com)?

                          That's a good thought, but not that I can see.

                          I ran migrate the other day and it was fine. I'm pretty sure I've only made minor changes to the emails, at WF, since then.

• I added one address (at WF) to test out migrate (via do), and it was created at OS as expected.
• I've made no changes to the domain specified in include_domains.
                          • sean replied to this.

aa11 Please check your addresses in your Opalstack dashboard and see if you have a blank one defined there. If you find one, please delete it and try the migration again.

                              sean

                              That was it!!

                              There was a blank address at Opalstack.

Not sure how it got that way. But I know why the issue started happening. I ran (with do) migrate on a different domain yesterday. It was successful. It seems to have correctly brought over Forward A, but also created a blank version (empty local part) as well.

I'm pretty sure I couldn't have created that, because I don't think the CP allows it. So I guess the migrator was somehow allowed to sneak that in?

                              I'm sorry I don't have the logs from that run of migrate.

                              In case it helps, this was the format of the address, and forward, in question:
                              a.b@example.com -> c+d@anotherdomain.com

I'd suspect the . or +, except other email addresses with the same format were successfully created by migrate at OS (i.e. without creating the blank email address).
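For what it's worth, the earlier AttributeError ('NoneType' object has no attribute 'split') is consistent with this: a blank or missing address makes a later .split() call blow up. A defensive sketch of the kind of filtering that avoids it (this is my reconstruction of what generate_needed_domains might do, not the migrator's actual code):

```python
def needed_domains(addresses):
    """Collect the domains for a list of source addresses, skipping
    blank or malformed entries instead of crashing on them."""
    domains = set()
    for addr in addresses:
        if not addr or "@" not in addr:
            continue  # None, empty string, or missing '@'
        local, domain = addr.split("@", 1)
        if not local or not domain:
            continue  # blank local part or blank domain
        domains.add(domain)
    return domains
```

With a guard like this, a stray blank address would be skipped (ideally with a warning) rather than aborting the whole run.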


                              tl;dr: If you're reading this and have similar issues, make sure you have no email addresses, at Opalstack or WebFaction, with a BLANK local part.

                              Thanks @sean !!
