Sorry, you need to login first to access this service. · Issue #12 · GiusTex/EdgeGPT · GitHub
This repository was archived by the owner on Oct 2, 2023. It is now read-only.

Sorry, you need to login first to access this service. #12

@Simplegram

Description


I have a login issue with EdgeGPT. After doing a clean install and updating the EdgeGPT extension to 0.6.8, the issue temporarily disappeared and then started appearing again. I have looked into #5, but I don't really know where to log in next. I have logged in to https://bing.com, Microsoft Edge, and Windows 11. Below is my oobabooga log. I removed a few OOM errors; let me know if those could also be a factor. I changed the activation word to Bing in the last two interactions.

Starting arguments: --auto-devices --chat --xformers --sdp-attention --extensions EdgeGPT long_term_memory openai

Models tested:

  • TheBloke/guanaco-33B-GPTQ
  • TheBloke/guanaco-13B-GPTQ
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: einops in e:\projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages (0.6.1)
WARNING:A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
bin E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll
INFO:Loading settings from settings.json...
INFO:Loading the extension "EdgeGPT"...

Thanks for using the EdgeGPT extension! If you encounter any bug or you have some nice idea to add, write it on the issue page here: https://github.com/GiusTex/EdgeGPT/issues
INFO:Loading the extension "long_term_memory"...

-----------------------------------------
IMPORTANT LONG TERM MEMORY NOTES TO USER:
-----------------------------------------
Please remember that LTM-stored memories will only be visible to the bot during your NEXT session. This prevents the loaded memory from being flooded with messages from the current conversation which would defeat the original purpose of this module. This can be overridden by pressing 'Force reload memories'
----------
LTM CONFIG
----------
change these values in ltm_config.json
{'ltm_context': {'injection_location': 'BEFORE_NORMAL_CONTEXT',
                 'memory_context_template': "{name2}'s memory log:\n"
                                            '{all_memories}\n'
                                            'During conversations between '
                                            '{name1} and {name2}, {name2} will '
                                            'try to remember the memory '
                                            'described above and naturally '
                                            'integrate it with the '
                                            'conversation.',
                 'memory_template': '{time_difference}, {memory_name} said:\n'
                                    '"{memory_message}"'},
 'ltm_reads': {'max_cosine_distance': 0.6,
               'memory_length_cutoff_in_chars': 1000,
               'num_memories_to_fetch': 2},
 'ltm_writes': {'min_message_length': 100}}
----------
-----------------------------------------
INFO:Loading the extension "openai"...
INFO:Loading the extension "gallery"...
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.

Loaded embedding model: all-mpnet-base-v2, max sequence length: 384
Starting OpenAI compatible api:
OPENAI_API_BASE=http://127.0.0.1:5001/v1
INFO:Loading TheBloke_guanaco-33B-GPTQ...
INFO:Found the following quantized model: models\TheBloke_guanaco-33B-GPTQ\Guanaco-33B-GPTQ-4bit.act-order.safetensors
INFO:Replaced attention with xformers_attention
INFO:Loaded the model in 76.28 seconds.

Output generated in 7.84 seconds (2.68 tokens/s, 21 tokens, context 1715, seed 928831495)
Output generated in 13.31 seconds (7.89 tokens/s, 105 tokens, context 1830, seed 1511671731)
`OOM error`
Output generated in 27.42 seconds (9.96 tokens/s, 273 tokens, context 1934, seed 742048406)
Output generated in 13.05 seconds (5.44 tokens/s, 71 tokens, context 2045, seed 880538633)
`OOM error`
Output generated in 3.79 seconds (4.48 tokens/s, 17 tokens, context 1993, seed 1744823770)
`OOM error`
Output generated in 7.28 seconds (4.81 tokens/s, 35 tokens, context 2019, seed 777958075)
Output generated in 6.27 seconds (4.46 tokens/s, 28 tokens, context 2022, seed 1894157071)
Traceback (most recent call last):
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\gradio\routes.py", line 414, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\gradio\blocks.py", line 1323, in process_api
    result = await self.call_function(
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\gradio\blocks.py", line 1067, in call_function
    prediction = await utils.async_iteration(iterator)
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\gradio\utils.py", line 339, in async_iteration
    return await iterator.__anext__()
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\gradio\utils.py", line 332, in __anext__
    return await anyio.to_thread.run_sync(
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\gradio\utils.py", line 315, in run_sync_iterator_async
    return next(iterator)
  File "E:\Projects\cpy\llms\textgen-webui-gpu\text-generation-webui\modules\chat.py", line 327, in generate_chat_reply_wrapper
    for i, history in enumerate(generate_chat_reply(text, shared.history, state, regenerate, _continue, loading_message=True)):
  File "E:\Projects\cpy\llms\textgen-webui-gpu\text-generation-webui\modules\chat.py", line 321, in generate_chat_reply
    for history in chatbot_wrapper(text, history, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message):
  File "E:\Projects\cpy\llms\textgen-webui-gpu\text-generation-webui\modules\chat.py", line 230, in chatbot_wrapper
    prompt = apply_extensions('custom_generate_chat_prompt', text, state, **kwargs)
  File "E:\Projects\cpy\llms\textgen-webui-gpu\text-generation-webui\modules\extensions.py", line 193, in apply_extensions
    return EXTENSION_MAP[typ](*args, **kwargs)
  File "E:\Projects\cpy\llms\textgen-webui-gpu\text-generation-webui\modules\extensions.py", line 80, in _apply_custom_generate_chat_prompt
    return extension.custom_generate_chat_prompt(text, state, **kwargs)
  File "E:\Projects\cpy\llms\textgen-webui-gpu\text-generation-webui\extensions\EdgeGPT\script.py", line 171, in custom_generate_chat_prompt
    asyncio.run(EdgeGPT())
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\asyncio\base_events.py", line 649, in run_until_complete
    return future.result()
  File "E:\Projects\cpy\llms\textgen-webui-gpu\text-generation-webui\extensions\EdgeGPT\script.py", line 157, in EdgeGPT
    bot = await Chatbot.create()
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\EdgeGPT.py", line 634, in create
    await _Conversation.create(self.proxy, cookies=cookies),
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\EdgeGPT.py", line 411, in create
    raise NotAllowedToAccess(self.struct["result"]["message"])
EdgeGPT.NotAllowedToAccess: Sorry, you need to login first to access this service.

(screenshots attached)
