Anyon technologies backend integration by Lambdauv · Pull Request #2191 · NVIDIA/cuda-quantum · GitHub

Conversation

@Lambdauv
Contributor

@Lambdauv Lambdauv commented Sep 4, 2024

I have read the Contributor License Agreement and I hereby accept the Terms.

Description

Lambdauv and others added 30 commits August 22, 2024 20:02
…into "id_token", "refresh_token" for compatibility with anyon server.py.; 2. in Job request, anyon server.py will respond an array of [responseJSON, http_status_code]. Consequently, the cudaq helper .cpp will need to take just the first element of the get/post job request response for JSON data extraction. changes were made in Helper.cpp to be consistent with this.
…e in anyon's server.py is an array of [responseDataJSON, http_code]. This change fixes the inconsistency by having the result take only the first element of the postJobResponse.
…chitectures and corresponding native gate sets.
…structions to allow CudaQ compilation for the target QPU's native gate set and connectivity topology. Passed tests requiring fewer qubits than the qubit count specified in telegraph-8q.txt.
…gh qubits to run the last test. But the last test failed, as the results are inconsistent with the expectation.
…gate pass only works with the native 2-qubit gate restricted to x(1) for the ctests included.
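The response-handling change described in the commit messages above can be sketched as follows. This is a minimal illustration, not the actual cuda-quantum helper code; the function name and the job fields are hypothetical, and only the [responseJSON, http_status_code] shape is taken from the commits:

```python
# Illustrative sketch: the Anyon mock server replies with an array of
# [responseJSON, http_status_code], so the client must take only the
# first element before extracting JSON fields. Names are hypothetical.

def unpack_server_response(response_body):
    """Unpack a [payload, status_code] reply from the mock server."""
    payload, status_code = response_body
    if not 200 <= status_code < 300:
        raise RuntimeError(f"job request failed with HTTP {status_code}")
    return payload

# Example: a job-creation reply shaped as the server would send it.
body = [{"job_id": "abc123", "status": "queued"}, 201]
data = unpack_server_response(body)
print(data["job_id"])  # the JSON payload is the first array element
```

The same first-element extraction is what Helper.cpp performs on the C++ side for both GET and POST job requests.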
@Lambdauv
Contributor Author

Lambdauv commented Sep 5, 2024

I definitely ran bash scripts/run_clang_format.sh before committing and, like last time, it didn't really change anything, so the format check still fails this time.

Ah, I think the reason it is not running on your base64.hpp file is that the script is not processing .hpp files (primarily because we do not have any regular .hpp files).
Feel free to run clang-format -i runtime/cudaq/platform/default/rest/helpers/anyon/base64.hpp manually, but as mentioned, I think it would be preferable to use llvm::decodeBase64() and llvm::encodeBase64() if possible.
(PS - you can revert 085ece7 as that had no effect.)

Thanks, just did that and it indeed changed base64.hpp. Thanks a lot for the help! I will switch to llvm::encodeBase64 in the next PR, as the current code has already been validated against our remote REST server.

As the last attempt didn't work, I just switched to llvm::encodeBase64 to make the format-checking bot happy.
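For reference, the transformation being swapped out is a standard base64 round-trip. A minimal Python sketch, using the standard library purely as a stand-in for llvm::encodeBase64()/llvm::decodeBase64() on the C++ side (the payload string below is illustrative, not from the PR):

```python
import base64

# Illustrative base64 round-trip, equivalent in effect to what the
# custom base64.hpp (and its llvm::encodeBase64 replacement) performs
# when preparing credentials for the REST server.
raw = b'{"token": "example-credential"}'
encoded = base64.b64encode(raw).decode("ascii")
decoded = base64.b64decode(encoded)
assert decoded == raw  # encode/decode must be lossless
print(encoded)
```

Using the LLVM-provided helpers avoids maintaining a hand-rolled header that the repository's formatting tooling does not cover.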

@bmhowe23
Collaborator

bmhowe23 commented Sep 5, 2024

Sorry for the swirl on this @Lambdauv. Can you see the details of the failure on this page: https://github.com/NVIDIA/cuda-quantum/actions/runs/10724371201? It shows the exact files that are causing the failures, along with the changes that are needed.

It is currently failing on Python formatting, not C++.

This can be remedied by applying the patch provided there, or by running yapf -i --style google utils/mock_qpu/anyon/__init__.py locally.

@Lambdauv
Contributor Author

Lambdauv commented Sep 5, 2024

> Sorry for the swirl on this @Lambdauv. Can you see the details of the failure on this page: https://github.com/NVIDIA/cuda-quantum/actions/runs/10724371201? It shows the exact files that are causing the failures, along with the changes that are needed.
>
> It is currently failing on Python formatting, not C++.
>
> This can be remedied by applying the patch provided there, or by running yapf -i --style google utils/mock_qpu/anyon/__init__.py locally.

Thanks for pointing that out; I updated the Python script as suggested.

@bmhowe23
Collaborator

bmhowe23 commented Sep 5, 2024

/ok to test

Command Bot: Processing...

@github-actions

github-actions bot commented Sep 5, 2024

CUDA Quantum Docs Bot: A preview of the documentation can be found here.

github-actions bot pushed a commit that referenced this pull request Sep 5, 2024
@bmhowe23
Collaborator

bmhowe23 commented Sep 10, 2024

/ok to test

Command Bot: Processing...

Collaborator

@bmhowe23 bmhowe23 left a comment


@Lambdauv - this looks great, thank you! Just a few minor comments below. Hopefully we can get this merged today or tomorrow. Thanks!

github-actions bot pushed a commit that referenced this pull request Sep 11, 2024
@github-actions

CUDA Quantum Docs Bot: A preview of the documentation can be found here.

@bmhowe23
Collaborator

bmhowe23 commented Sep 11, 2024

/ok to test

Command Bot: Processing...

github-actions bot pushed a commit that referenced this pull request Sep 11, 2024
@github-actions

CUDA Quantum Docs Bot: A preview of the documentation can be found here.

@bmhowe23 bmhowe23 merged commit 34a1d35 into NVIDIA:main Sep 11, 2024
129 checks passed
@github-actions github-actions bot locked and limited conversation to collaborators Sep 11, 2024
@bettinaheim bettinaheim changed the title first anyon technologies backend integration pull request Anyon technologies backend integration Nov 19, 2024
@bettinaheim bettinaheim added the enhancement New feature or request label Nov 19, 2024
@bettinaheim bettinaheim added this to the release 0.9.0 milestone Nov 19, 2024