I needed to remove the required django version due to a security bug. I dropped all failing tests, but perhaps you can figure out something better than I did.
~/cvsPortage/gentoo-x86/dev-python/django-compressor $ ebuild django-compressor-1.4.ebuild test
Ran 191 tests in 1.508s
OK (skipped=1)
Creating test database for alias 'default'...
Destroying test database for alias 'default'...
 * python3_3: running distutils-r1_run_phase _clean_egg_info
 * python3_4: running distutils-r1_run_phase _clean_egg_info
 * python2_7: running distutils-r1_run_phase _clean_egg_info
>>> Completed testing dev-python/django-compressor-1.4

with Installed versions: 1.7.6

You've lost me here. Has django itself been upgraded, and is it fine? The test suite looks like something made by someone else in the past, and tests don't appear to have been selectively excluded. I don't know what's going on here.
Created attachment 398960 [details]
build.log

Please see the build.log. This happens when removing the test patch.
You mean like so:

Ran 180 tests in 2.110s
FAILED (errors=29, skipped=2)
Creating test database for alias 'default'...
Destroying test database for alias 'default'...

Seriously, I'm in catch-up mode here. The ChangeLog entry

28 Feb 2015; Justin Lecher <jlec@gentoo.org> +files/django-compressor-1.4-test.patch,

says you added the patch to exclude non-working tests. I bumped it last June, and I can only presume it passed the test suite back then, otherwise I'd have made some indication somewhere. Back then I'd have tested against a django-1.6.x install.

setup.py has:

install_requires=[
    'django-appconf >= 0.4',

and django-appconf has:

RDEPEND="
    dev-python/django[${PYTHON_USEDEP}]

which is all I can find that stipulates "the required django version".

Note: https://github.com/django-compressor/django-compressor/issues/595
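For comparison, one explicit (if ugly) alternative to relying on whatever django-appconf pulls in would be to bound the django version in the ebuild itself. A hypothetical sketch only; the version bounds are illustrative and untested, and the test-conditional dep has the repoman downsides discussed later in this bug:

```shell
# Hypothetical sketch: leave the runtime dep unbounded, but force the
# known-good django range when the test suite is to be run.
RDEPEND="dev-python/django-appconf[${PYTHON_USEDEP}]"
DEPEND="${RDEPEND}
	test? ( <dev-python/django-1.7[${PYTHON_USEDEP}] )"
```

This only constrains the build/test environment, not what users end up running against, so it papers over rather than fixes the 1.7 failures.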
During the test phase of python3.3 I got a lot of:

Traceback (most recent call last):
  File "/usr/lib64/python3.3/site-packages/urllib3/connectionpool.py", line 516, in urlopen
    body=body, headers=headers)
  File "/usr/lib64/python3.3/site-packages/urllib3/connectionpool.py", line 308, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib64/python3.3/http/client.py", line 1065, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib64/python3.3/http/client.py", line 1103, in _send_request
    self.endheaders(body)
  File "/usr/lib64/python3.3/http/client.py", line 1061, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python3.3/http/client.py", line 906, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.3/http/client.py", line 844, in send
    self.connect()
  File "/usr/lib64/python3.3/site-packages/urllib3/connection.py", line 141, in connect
    conn = self._new_conn()
  File "/usr/lib64/python3.3/site-packages/urllib3/connection.py", line 120, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "/usr/lib64/python3.3/site-packages/urllib3/util/connection.py", line 85, in create_connection
    raise err
  File "/usr/lib64/python3.3/site-packages/urllib3/util/connection.py", line 76, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.3/site-packages/elasticsearch/connection/http_urllib3.py", line 67, in perform_request
    response = self.pool.urlopen(method, url, body, retries=False, headers=self.headers, **kw)
  File "/usr/lib64/python3.3/site-packages/urllib3/connectionpool.py", line 559, in urlopen
    _pool=self, _stacktrace=stacktrace)
  File "/usr/lib64/python3.3/site-packages/urllib3/util/retry.py", line 223, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/lib64/python3.3/site-packages/six.py", line 624, in reraise
    raise value.with_traceback(tb)
  File "/usr/lib64/python3.3/site-packages/urllib3/connectionpool.py", line 516, in urlopen
    body=body, headers=headers)
  File "/usr/lib64/python3.3/site-packages/urllib3/connectionpool.py", line 308, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib64/python3.3/http/client.py", line 1065, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib64/python3.3/http/client.py", line 1103, in _send_request
    self.endheaders(body)
  File "/usr/lib64/python3.3/http/client.py", line 1061, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python3.3/http/client.py", line 906, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.3/http/client.py", line 844, in send
    self.connect()
  File "/usr/lib64/python3.3/site-packages/urllib3/connection.py", line 141, in connect
    conn = self._new_conn()
  File "/usr/lib64/python3.3/site-packages/urllib3/connection.py", line 120, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "/usr/lib64/python3.3/site-packages/urllib3/util/connection.py", line 85, in create_connection
    raise err
  File "/usr/lib64/python3.3/site-packages/urllib3/util/connection.py", line 76, in create_connection
    sock.connect(sa)
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionRefusedError(111, 'Connection refused'))

elasticsearch: DEBUG: > None
urllib3.util.retry: DEBUG: Converted retries value: False -> Retry(total=False, connect=None, read=None, redirect=0)
urllib3.connectionpool: INFO: Starting new HTTP connection (398): localhost
elasticsearch: WARNING: GET http://localhost:25123/_cluster/health?wait_for_status=yellow [status:N/A request:0.001s]

Then it ended with:

XML: /mnt/gen2/TmpDir/portage/dev-python/elasticsearch-curator-3.0.0/work/curator-3.0.0/nosetests.xml

Name                             Stmts   Miss  Cover   Missing
--------------------------------------------------------------
curator                              3      0   100%
curator._version                     1      0   100%
curator.api                         12      0   100%
curator.api.alias                   56     14    75%   28-29, 33, 38-39, 63, 81-90
curator.api.allocation              23      0   100%
curator.api.bloom                   31      2    94%   21-22
curator.api.close                   15      0   100%
curator.api.delete                  17      0   100%
curator.api.filter                 119      0   100%
curator.api.opener                  14      0   100%
curator.api.optimize                35      2    94%   38-39
curator.api.replicas                18      0   100%
curator.api.show                     5      0   100%
curator.api.snapshot                50      0   100%
curator.api.utils                  132     10    92%   24-31, 75-76, 256
curator.cli                         14      0   100%
curator.cli.alias                   13      4    69%   14-17
curator.cli.allocation              13      4    69%   14-17
curator.cli.bloom                    8      0   100%
curator.cli.cli                     47     22    53%   12-17, 52-83
curator.cli.close                    7      0   100%
curator.cli.delete                  11      0   100%
curator.cli.index_selection         64     45    30%   45-108
curator.cli.opener                   7      0   100%
curator.cli.optimize                10      0   100%
curator.cli.replicas                12      4    67%   13-16
curator.cli.show                     9      0   100%
curator.cli.snapshot                20      4    80%   38-41
curator.cli.snapshot_selection      60     40    33%   45-100
curator.cli.utils                  109     67    39%   36, 39-45, 49, 52, 79-91, 98-133, 157-182
--------------------------------------------------------------
TOTAL                              935    218    77%
----------------------------------------------------------------------
Ran 137 tests in 17.990s
OK (SKIP=1)
 * python3_3: running distutils-r1_run_phase _clean_egg_info
>>> Completed testing dev-python/elasticsearch-curator-3.0.0

so it appears to have actually completed.

-rw-rw---- 1 testuser portage 1539050 Mar 17 18:03 /mnt/gen2/TmpDir/portage/dev-python/elasticsearch-curator-3.0.0/temp/build.log

1.5 MB of build log, containing a lot of "Traceback (most recent call last):". A LOT.
sorry wrong bug
dev-python/django-compressor $ ebuild django-compressor-1.4.ebuild clean test
----------------------------------------------------------------------
Ran 180 tests in 1.906s
FAILED (errors=29, skipped=2)

for dev-python/django-1.7.7; however, the tests pass fine with 1.6.11, meaning the patch isn't even required for <dev-python/django-1.7.

Having 4 distinct major versions in one package, even with one masked, is an invitation for confusion. At a glance you could resolve it by purging django-1.5*, which begs the question: is it then sane to also purge =dev-python/django-1.4*? That would allow the use of the patch, which, according to the test run I just did, applies for 1.7 but is not required for 1.6, so it can be used with the realisation that it wipes out around 29 perfectly passable tests in 1.6.11. What fine-grained QA we have.

Putting a version of a key dep behind test? ( ) is something I have already run into before, and it drove our main QA participant in the python project to even further distraction. It messes up repoman and therefore the QA guy. Making a patch conditional on the actually installed version of the key dep django would, to my understanding, mean scanning the system, which breaks ebuild-writing protocol. Even if it were allowed, it's heavy-handed. There is no clean solution, only ugly ones.

I am not an actual django user. The staffer who used django is no longer an active participant in g-python. From memory, the current python lead also doesn't know the first thing about django; however, I still suggest you ask him about purging django-1.4* and 1.5*. They are old, and I for one have no inbuilt loyalty to them. But then, nor does the python team to my knowledge.
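For the record, the "scanning the system" approach I'm dismissing above would look roughly like this. A sketch only, and a discouraged one: has_version queries the live system, which is exactly what makes the build non-deterministic and breaks ebuild-writing protocol:

```shell
# Discouraged sketch: conditionally apply the test patch depending on
# the django version actually installed on the build host. The patch
# name and version bound are illustrative.
src_prepare() {
	if has_version ">=dev-python/django-1.7"; then
		epatch "${FILESDIR}"/${P}-test.patch
	fi
	distutils-r1_src_prepare
}
```

Since the patch is harmless on 1.6 apart from skipping ~29 passing tests, unconditionally applying it is the less ugly of the ugly options.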
(In reply to Ian Delaney from comment #6) > begs the question: is it then sane to also purge =dev-python/django-1.4*? As long as upstream maintains version 1.4, we should keep it around.
django 1.4 was purged a while ago.