This is not one of the Seven Simple Questions that I mentioned earlier. It is a question that stands alone: it precedes the Seven Simple Questions and creates a context for them. Not enough people are asking this preceding question, much less answering it.
Sorry, I had something come up in my personal life.

Soeren E April 26, at 1:
We will need to reduce the scope quite a bit, as I cannot commit to an ambitious essay.
A good thesis on my part might be that there is a negligible chance of humans creating an artificial general intelligence within the next years.
I mean it in the sense that donating to places like MIRI is a waste of money.

Douglas Summers-Stay April 27, at 5:
I work as an AI researcher and have some relevant publications.
I could contribute together with Soeren, if you both want to. What is the existential risk of AI technology compared to other existential risks? My position would be: even getting to AGI will be very hard and take a very long time.
Even if we get to AGI, it is unlikely that it would be able to recursively self-improve. Even if it can recursively self-improve, it is unlikely that the self-improvement would be exponential.
Even if that self-improvement is exponential, it is unlikely to remain exponential for very long. Again, we can focus on AGI if you want; I do think it would be interesting to do some sort of first-principles write-up where we nail down definitions and give readers a layout of the current state of the technology and what needs to happen for AGI.
Soeren E April 27, at
To make my claim explicit: I reserve the right to update as I write the essay. Would you be willing to assign a percentage to your belief? I would like to narrow the scope so as not to consider whether MIRI etc.
Also, unless the temporal discount rate is really low, it is not worthwhile to care at all about events in years, even if they are very likely.
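The discounting point above can be made concrete with a minimal sketch (not from the thread; the function name and the rates are illustrative, assuming simple exponential discounting):

```python
# Sketch: exponential discounting of a future benefit.
# present_value = future_value / (1 + r) ** t
def present_value(future_value, annual_rate, years):
    """Discount a benefit received `years` from now at a constant annual rate."""
    return future_value / (1 + annual_rate) ** years

# At a 3% annual rate, a benefit 200 years out retains well under 1% of its value,
# so distant events barely register in the decision:
far_future_high_rate = present_value(1.0, 0.03, 200)

# Only a near-zero rate keeps very distant events worth caring about:
far_future_low_rate = present_value(1.0, 0.001, 200)
```

The qualitative takeaway is that the discount factor shrinks geometrically with time, so the argument hinges entirely on how low the rate is assumed to be.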
Would you be interested in adversarial collaboration with both me and Douglas Summers-Stay? Feel free to email me soeren.
Perhaps a better question regarding this issue is to weigh the perceived probability of developing AGI against the perceived ability of humans to control said AGI (for example, by crafting effective tests of its morals).
And to put all this in a context that makes sense to consider technologically, I think you need a time horizon within the potential lived experience of someone reading this blog.
Let's assume a lifespan of 90 years: life expectancy has been increasing for humans, but there is no evidence that maximum lifespan is increasing, so we should default to a generous assumption of lifespan based on its current stagnation.
That gives us until then to develop AGI within a time horizon that is meaningful in the sense that we ought to think about doing something soon.